MOVING IMAGE DECODING DEVICE AND MOVING IMAGE DECODING METHOD
Patent Abstract:
Abstract of the invention patent: "VIDEO ENCODING DEVICE, VIDEO ENCODING METHOD, VIDEO ENCODING PROGRAM, TRANSMISSION DEVICE, TRANSMISSION METHOD AND TRANSMISSION PROGRAM, AND VIDEO DECODING DEVICE, VIDEO DECODING METHOD, VIDEO DECODING PROGRAM, RECEIVING DEVICE, RECEIVING METHOD AND RECEIVING PROGRAM". The present invention relates to an inter prediction information derivation section (104) that derives inter prediction information candidates from the inter prediction information of a prediction block neighboring a prediction block to be coded, or of a prediction block present at the same position as, or near, the prediction block to be coded in a coded image that differs temporally from the prediction block to be coded. When the number of inter prediction information candidates is less than a prescribed number of candidates, a valid merge candidate supplementing section (135) supplements inter prediction information candidates that have the same prediction mode, reference index and motion vector value until the prescribed number of candidates is reached.
Publication number: BR112014024294B1
Application number: R112014024294-1
Filing date: 2013-04-12
Publication date: 2020-02-11
Inventors: Hiroya Nakamura; Shigeru Fukushima; Hideki Takehara
Applicant: JVC Kenwood Corporation
Main IPC class:
Patent Description:
Invention Patent Descriptive Report for MOVING IMAGE DECODING DEVICE AND MOVING IMAGE DECODING METHOD.
[TECHNICAL FIELD]
[001] The present invention relates to a technique for encoding and decoding moving images and, more particularly, to a technique for encoding and decoding moving images using motion-compensated prediction.
[002] The MPEG-4 AVC/H.264 standard is a representative moving image compression coding scheme. The MPEG-4 AVC/H.264 standard uses motion compensation in which an image is divided into a plurality of rectangular blocks, images that have already been encoded or decoded are used as reference images, and motion is predicted from those reference images. A method of predicting motion based on this motion compensation is called inter prediction or motion-compensated prediction. In the inter prediction of the MPEG-4 AVC/H.264 standard, motion compensation is performed in such a way that a plurality of images can be used as reference images, and an optimal reference image is selected for each block from the plurality of reference images. To that end, a reference index is allocated to each reference image, and the reference images are specified by the reference index. In B pictures, a maximum of two reference images can be selected from the encoded or decoded reference images and used for inter prediction. Prediction from these two reference images is classified into L0 prediction (list-0 prediction), which is used mainly as forward prediction, and L1 prediction (list-1 prediction), which is used mainly as backward prediction.
Petition 870190021217, of 03/01/2019, p. 5/536
[003] Additionally, bi-prediction, which uses the two prediction modes L0 prediction and L1 prediction simultaneously, is also defined.
In the case of bi-prediction, bidirectional prediction is performed by obtaining inter prediction signals in the L0 and L1 prediction modes, multiplying them by a weighting factor, and superimposing them with a correction offset added, so as to construct a final inter prediction image signal. As the weighting factors and correction offsets used for weighted prediction, a representative value is set for each reference image of each list and is encoded on a per-image basis. The coding information related to inter prediction includes, for each block, a prediction mode for distinguishing among L0 prediction, L1 prediction and bi-prediction; a reference index for specifying a reference image for each reference list; and a motion vector that represents the direction and amount of motion of the block. These items of coding information are encoded or decoded.
[004] Additionally, in the MPEG-4 AVC/H.264 scheme, a direct mode is defined that constructs the inter prediction information of an encoding/decoding target block from the inter prediction information of an already encoded/decoded block. Since the direct mode does not require coding of inter prediction information, coding efficiency is improved.
[005] A temporal direct mode, which uses the correlation of inter prediction information in the temporal direction, will be described with reference to FIG. 36. An image whose L1 reference index is registered at 0 is called a colPic reference image. A block at the same position as an encoding/decoding target block within the colPic reference image is referred to as a reference block.
[006] If the reference block was encoded using L0 prediction, the L0 motion vector of the reference block is taken as the reference motion vector mvCol. If the reference block was not encoded using L0 prediction but was encoded using L1 prediction, the L1 motion vector of the reference block is taken as the reference motion vector mvCol.
The image that the reference motion vector mvCol refers to is called the L0 reference image in the temporal direct mode, and the colPic reference image is called the L1 reference image in the temporal direct mode.
[007] An L0 motion vector mvL0 and an L1 motion vector mvL1 in the temporal direct mode are derived from the reference motion vector mvCol by performing a scaling process.
[008] The POC of the L0 reference image in the temporal direct mode is subtracted from the POC of the colPic reference image to produce an inter-image distance td. A POC is a variable associated with an image to be encoded; a value that increases by 1 in image output/display order is set as the POC. The difference between the POCs of two images represents an inter-image distance along the time axis.
[009] td = (POC of the colPic reference image) − (POC of the L0 reference image in the temporal direct mode)
[0010] The POC of the L0 reference image in the temporal direct mode is subtracted from the POC of the encoding/decoding target image to produce an inter-image distance tb.
[0011] tb = (POC of the encoding/decoding target image) − (POC of the L0 reference image in the temporal direct mode)
[0012] The L0 motion vector mvL0 in the temporal direct mode is derived from the reference motion vector mvCol by performing the scaling process.
[0013] mvL0 = tb / td * mvCol
[0014] The reference motion vector mvCol is subtracted from the L0 motion vector mvL0 in the temporal direct mode to produce the L1 motion vector mvL1.
[0015] mvL1 = mvL0 − mvCol
[0016] When a moving image encoding device and a moving image decoding device have low processing capacity, the temporal direct mode process can be omitted.
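The derivation in paragraphs [008] to [0015] can be sketched as follows. This is a hedged illustration of the scaling idea, not the normative H.264 procedure: the function name and the simple tuple representation of motion vectors are assumptions made for the example, and the integer rounding details of the real standard are omitted.

```python
def temporal_direct_vectors(poc_target, poc_colpic, poc_l0_ref, mv_col):
    """Derive mvL0 and mvL1 in temporal direct mode by scaling mvCol.

    poc_target -- POC of the encoding/decoding target image
    poc_colpic -- POC of the colPic reference image (L1 reference index 0)
    poc_l0_ref -- POC of the L0 reference image that mvCol points to
    mv_col     -- reference motion vector (x, y) taken from the reference block
    """
    td = poc_colpic - poc_l0_ref   # [009]  inter-image distance td
    tb = poc_target - poc_l0_ref   # [0011] inter-image distance tb
    # [0013] mvL0 = tb / td * mvCol, scaled per component
    mv_l0 = (mv_col[0] * tb / td, mv_col[1] * tb / td)
    # [0015] mvL1 = mvL0 - mvCol
    mv_l1 = (mv_l0[0] - mv_col[0], mv_l0[1] - mv_col[1])
    return mv_l0, mv_l1
```

For example, with poc_target = 4, poc_colpic = 8, poc_l0_ref = 0 and mv_col = (8, −4), the distances are td = 8 and tb = 4, giving mvL0 = (4, −2) and mvL1 = (−4, 2).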
[RELATED ART LIST]
[PATENT LITERATURE]
[0017] Patent Literature 1: JP 2004-129191 A
[DISCLOSURE OF THE INVENTION]
[0018] In this situation, the present inventors recognized the need to further compress coding information and reduce the total amount of code in a moving image coding scheme that uses motion-compensated prediction.
[0019] The present invention was made in view of this situation, and an objective thereof is to provide a moving image encoding and decoding technique that reduces the amount of code of the coding information and thereby improves coding efficiency by deriving candidates for the prediction information used in motion-compensated prediction according to the situation.
[0020] In order to achieve the objective, a moving image encoding device according to an aspect of the present invention encodes moving images using motion-compensated prediction in units of blocks obtained by dividing each image of the moving images, and includes: a prediction information encoding unit (110) that encodes information indicating a designated number of inter prediction information candidates; a prediction information derivation unit (104) that derives inter prediction information candidates from the inter prediction information of a prediction block neighboring an encoding target prediction block, or of a prediction block present at the same position as, or near, the encoding target prediction block in an encoded image at a position temporally different from that of the encoding target prediction block; a candidate list construction unit (130) that constructs an inter prediction information candidate list from the derived inter prediction information candidates; a candidate supplementing unit (135) that, when the number of inter prediction information candidates included in the constructed inter prediction information candidate list is smaller than the designated number of inter prediction information candidates, produces inter prediction information candidates whose prediction mode, reference index and motion vector have predetermined values until the number of inter prediction information candidates included in the inter prediction information candidate list reaches the designated number, and adds the produced inter prediction information candidates to the constructed candidate list; and a motion-compensated prediction unit (105) that selects one inter prediction information candidate from the inter prediction information candidates included in the inter prediction information candidate list and performs inter prediction on the encoding target prediction block using the selected inter prediction information candidate.
[0021] Another aspect of the present invention provides a moving image encoding device. The device encodes moving images using motion-compensated prediction in units of blocks obtained by dividing each image of the moving images, and includes: a prediction information encoding unit (110) that encodes information indicating a designated number of inter prediction information candidates; a prediction information derivation unit (104) that derives inter prediction information candidates from the inter prediction information of a prediction block neighboring an encoding target prediction block, or of a prediction block present at the same position as, or near, the encoding target prediction block in an encoded image at a position temporally different from that of the encoding target prediction block; a candidate list construction unit (130) that constructs an inter prediction information candidate list from the derived inter prediction information candidates; a candidate adding unit (134) that, when the number of inter prediction information candidates included in the constructed inter prediction information candidate list is smaller than the designated number, produces inter prediction information candidates whose prediction mode, reference index and motion vector have predetermined values and adds them to the constructed candidate list, and that, when the number of inter prediction information candidates included in the resulting candidate list is still smaller than the designated number, further produces one or more inter prediction information candidates in which at least one of the prediction mode, the reference index and the motion vector is changed with respect to the inter prediction information candidates having the predetermined values, and also adds these to the candidate list; a candidate supplementing unit (135) that, when the number of inter prediction information candidates included in the resulting candidate list is still smaller than the designated number, produces inter prediction information candidates whose prediction mode, reference index and motion vector have predetermined values until the number of inter prediction information candidates included in the candidate list reaches the designated number, and also adds these to the candidate list; and a motion-compensated prediction unit (105) that selects one inter prediction information candidate from the inter prediction information candidates included in the inter prediction information candidate list and performs inter prediction on the encoding target prediction block using the selected inter prediction information candidate.
[0022] Yet another aspect of the present invention provides a moving image encoding device. The device encodes a bit stream obtained by encoding moving images using motion-compensated prediction in units of blocks obtained by dividing each image of the moving images, and includes: a prediction information encoding unit (110) that encodes information indicating a designated number of inter prediction information candidates; a prediction information derivation unit (104) that derives inter prediction information candidates from the inter prediction information of a prediction block neighboring an encoding target prediction block, or of a prediction block present at the same position as, or near, the encoding target prediction block in an encoded image at a position temporally different from that of the encoding target prediction block; a candidate list construction unit (130) that constructs an inter prediction information candidate list from the derived inter prediction information candidates; a candidate supplementing unit (135) that, when the number of inter prediction information candidates included in the constructed inter prediction information candidate list is smaller than the designated number, produces inter prediction information candidates whose prediction mode, reference index and motion vector have predetermined values and adds them to the constructed candidate list; that, when the number of candidates included in the resulting candidate list is still smaller than the designated number, produces one or more inter prediction information candidates whose prediction mode and motion vector have the same values and whose reference index is changed with respect to the inter prediction information candidates having the predetermined values, and also adds these to the candidate list; and that, when the number of candidates included in the resulting candidate list is still smaller than the designated number, produces inter prediction information candidates whose prediction mode, reference index and motion vector have predetermined values until the number of candidates included in the candidate list reaches the designated number, and also adds these to the candidate list; and a motion-compensated prediction unit (105) that selects one inter prediction information candidate from the inter prediction information candidates included in the inter prediction information candidate list and performs inter prediction on the encoding target prediction block using the selected inter prediction information candidate.
[0023] Yet another aspect of the present invention provides a moving image encoding device. The device encodes moving images using motion-compensated prediction in units of blocks obtained by dividing each image of the moving images, and includes: a prediction information derivation unit (104) that stores, in advance, the designated number of inter prediction information candidates having predetermined prediction mode, reference index and motion vector in an inter prediction information candidate list capable of holding the designated number of candidates, thereby initializing the list, and then derives inter prediction information candidates from the inter prediction information of a prediction block neighboring an encoding target prediction block, or of a prediction block present at the same position as, or near, the encoding target prediction block in an encoded image at a position temporally different from that of the encoding target prediction block; and a motion-compensated prediction unit (105) that selects one inter prediction information candidate from the inter prediction information candidates included in the inter prediction information candidate list and performs inter prediction on the encoding target prediction block using the selected inter prediction information candidate.
[0024] Yet another aspect of the present invention provides a moving image encoding device. The device encodes moving images using motion-compensated prediction in units of blocks obtained by dividing each image of the moving images, and includes: a prediction information encoding unit (110) that encodes information indicating a designated number of inter prediction information candidates; a prediction information derivation unit (104) that derives, on the basis of the designated number of candidates, inter prediction information candidates from the inter prediction information of a prediction block neighboring an encoding target prediction block, or of a prediction block present at the same position as, or near, the encoding target prediction block in an encoded image at a position temporally different from that of the encoding target prediction block; a candidate list construction unit (130) that constructs an inter prediction information candidate list from the derived inter prediction information candidates; and a motion-compensated prediction unit (105) that, when the designated number of candidates is greater than or equal to 1, selects one inter prediction information candidate from the inter prediction information candidates included in the inter prediction information candidate list and performs inter prediction on the encoding target prediction block using the selected inter prediction information candidate, and that, when the designated number of candidates is 0, performs inter prediction on the encoding target prediction block using inter prediction information having a predetermined value.
[0025] Yet another aspect of the present invention provides a moving image encoding device. The device encodes moving images using motion-compensated prediction in units of blocks obtained by dividing each image of the moving images, and includes: a prediction information derivation unit (104) that derives inter prediction information candidates from the inter prediction information of a prediction block neighboring an encoding target prediction block, or of a prediction block present at the same position as, or near, the encoding target prediction block in an encoded image at a position temporally different from that of the encoding target prediction block; a candidate supplementing unit (135) that, when the number of inter prediction information candidates is smaller than the designated number of candidates, supplements inter prediction information candidates having the same prediction mode, reference index and motion vector until the number of inter prediction information candidates reaches the designated number of candidates; and a motion-compensated prediction unit (105) that selects one inter prediction information candidate from the inter prediction information candidates and performs inter prediction on the encoding target prediction block using the selected inter prediction information candidate.
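The two-stage filling performed by the candidate adding unit (134) and the candidate supplementing unit (135) described in the aspects above can be sketched as follows. This is a hedged, simplified illustration only: the candidate representation as a tuple of (prediction mode, reference index, motion vector), the default values, the number of reference indices cycled through, and the ordering are assumptions made for the example, not the device's actual implementation.

```python
def fill_candidate_list(candidates, designated_number,
                        num_ref_idx=4,
                        default=("Pred_L0", 0, (0, 0))):
    """Fill the candidate list up to the designated number of candidates.

    Stage 1 (adding): append candidates that keep the predetermined
    prediction mode and zero motion vector but change the reference index.
    Stage 2 (supplementing): if the list is still short, append copies of
    the fixed predetermined candidate until the designated number is
    reached, so an index into the list can always be decoded.
    """
    filled = list(candidates)
    mode, _, mv = default
    for ref_idx in range(num_ref_idx):        # stage 1: changed reference index
        if len(filled) >= designated_number:
            break
        filled.append((mode, ref_idx, mv))
    while len(filled) < designated_number:    # stage 2: fixed supplementation
        filled.append(default)
    return filled
```

Starting from one derived candidate and a designated number of 4, stage 1 alone completes the list with zero-vector candidates whose reference index counts up from 0.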
[0026] Yet another aspect of the present invention provides a moving image encoding method. The method encodes moving images using motion-compensated prediction in units of blocks obtained by dividing each image of the moving images, and includes: a prediction information encoding step of encoding information indicating a designated number of inter prediction information candidates; a prediction information derivation step of deriving inter prediction information candidates from the inter prediction information of a prediction block neighboring an encoding target prediction block, or of a prediction block present at the same position as, or near, the encoding target prediction block in an encoded image at a position temporally different from that of the encoding target prediction block; a candidate list construction step of constructing an inter prediction information candidate list from the derived inter prediction information candidates; a candidate supplementing step of, when the number of inter prediction information candidates included in the constructed inter prediction information candidate list is smaller than the designated number, producing inter prediction information candidates whose prediction mode, reference index and motion vector have predetermined values until the number of inter prediction information candidates included in the inter prediction information candidate list reaches the designated number, and adding the produced inter prediction information candidates to the constructed candidate list; and a motion-compensated prediction step of selecting one inter prediction information candidate from the inter prediction information candidates included in the inter prediction information candidate list and performing inter prediction on the encoding target prediction block using the selected inter prediction information candidate.
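The candidate list construction and supplementation steps of the method in paragraph [0026] can be sketched together as follows. A hedged illustration only: the helper name, the way spatially neighboring and temporally different candidates are passed in, and the duplicate-removal step are assumptions made for the example, not the normative procedure.

```python
def build_candidate_list(spatial_candidates, temporal_candidates,
                         designated_number,
                         default_candidate=("Pred_L0", 0, (0, 0))):
    """Construct an inter prediction information candidate list.

    1. Gather candidates derived from neighboring (spatial) and
       temporally different (temporal) prediction blocks.
    2. Remove duplicates so identical candidates are not listed twice.
    3. Supplement with predetermined candidates until the list holds
       exactly the designated number of candidates.
    """
    candidates = []
    for cand in list(spatial_candidates) + list(temporal_candidates):
        if cand not in candidates:               # duplicate removal
            candidates.append(cand)
    candidates = candidates[:designated_number]  # never exceed the limit
    while len(candidates) < designated_number:   # supplementation
        candidates.append(default_candidate)
    return candidates
```

Because the supplemented entries all share the same predetermined prediction mode, reference index and motion vector, the list always reaches the designated length regardless of how many candidates could be derived.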
[0027] Yet another aspect of the present invention provides a moving image encoding method. The method encodes moving images using motion-compensated prediction in units of blocks obtained by dividing each image of the moving images, and includes: a prediction information encoding step of encoding information indicating a designated number of inter prediction information candidates; a prediction information derivation step of deriving inter prediction information candidates from the inter prediction information of a prediction block neighboring an encoding target prediction block, or of a prediction block present at the same position as, or near, the encoding target prediction block in an encoded image at a position temporally different from that of the encoding target prediction block; a candidate list construction step of constructing an inter prediction information candidate list from the derived inter prediction information candidates; a candidate adding step of, when the number of inter prediction information candidates included in the constructed inter prediction information candidate list is smaller than the designated number, producing inter prediction information candidates whose prediction mode, reference index and motion vector have predetermined values and adding them to the constructed candidate list, and, when the number of candidates included in the resulting candidate list is still smaller than the designated number, producing one or more inter prediction information candidates in which at least one of the prediction mode, the reference index and the motion vector is changed with respect to the inter prediction information candidates having the predetermined values, and also adding these to the candidate list; a candidate supplementing step of, when the number of inter prediction information candidates included in the resulting candidate list is still smaller than the designated number, producing inter prediction information candidates whose prediction mode, reference index and motion vector have predetermined values until the number of candidates included in the candidate list reaches the designated number, and also adding these to the candidate list; and a motion-compensated prediction step of selecting one inter prediction information candidate from the inter prediction information candidates included in the inter prediction information candidate list and performing inter prediction on the encoding target prediction block using the selected inter prediction information candidate.
[0028] Yet another aspect of the present invention provides a moving image encoding method.
The method encodes a bit stream obtained by encoding moving images using motion-compensated prediction in units of blocks obtained by dividing each image of the moving images, and includes: a prediction information encoding step of encoding information indicating a designated number of inter prediction information candidates; a prediction information derivation step of deriving inter prediction information candidates from the inter prediction information of a prediction block neighboring an encoding target prediction block, or of a prediction block present at the same position as, or near, the encoding target prediction block in an encoded image at a position temporally different from that of the encoding target prediction block; a candidate list construction step of constructing an inter prediction information candidate list from the derived inter prediction information candidates; a candidate adding step of, when the number of inter prediction information candidates included in the constructed inter prediction information candidate list is smaller than the designated number, producing inter prediction information candidates whose prediction mode, reference index and motion vector have predetermined values and adding them to the constructed candidate list, and, when the number of candidates included in the resulting candidate list is still smaller than the designated number, producing one or more inter prediction information candidates whose prediction mode and motion vector have the same values and whose reference index is changed with respect to the inter prediction information candidates having the predetermined values, and also adding these to the candidate list; a candidate supplementing step of, when the number of inter prediction information candidates included in the resulting candidate list is still smaller than the designated number, producing inter prediction information candidates whose prediction mode, reference index and motion vector have predetermined values until the number of candidates included in the candidate list reaches the designated number, and also adding these to the candidate list; and a motion-compensated prediction step of selecting one inter prediction information candidate from the inter prediction information candidates included in the inter prediction information candidate list and performing inter prediction on the encoding target prediction block using the selected inter prediction information candidate.
[0029] Yet another aspect of the present invention provides a moving image encoding method. The method encodes moving images using motion-compensated prediction in units of blocks obtained by dividing each image of the moving images, and includes: a prediction information derivation step of storing, in advance, the designated number of inter prediction information candidates having predetermined prediction mode, reference index and motion vector in an inter prediction information candidate list capable of holding the designated number of candidates, thereby initializing the list, and then deriving inter prediction information candidates from the inter prediction information of a prediction block neighboring an encoding target prediction block, or of a prediction block present at the same position as, or near, the encoding target prediction block in an encoded image at a position temporally different from that of the encoding target prediction block; and a motion-compensated prediction step of selecting one inter prediction information candidate from the inter prediction information candidates included in the inter prediction information candidate list and performing inter prediction on the encoding target prediction block using the selected inter prediction information candidate.
[0030] Yet another aspect of the present invention provides a moving image encoding method. The method encodes moving images using motion-compensated prediction in units of blocks obtained by dividing each image of the moving images, and includes: a prediction information encoding step of encoding information indicating a designated number of inter prediction information candidates; a prediction information derivation step of deriving, on the basis of the designated number of candidates, inter prediction information candidates from the inter prediction information of a prediction block neighboring an encoding target prediction block, or of a prediction block present at the same position as, or near, the encoding target prediction block in an encoded image at a position temporally different from that of the encoding target prediction block; a candidate list construction step of constructing an inter prediction information candidate list from the derived inter prediction information candidates; and a motion-compensated prediction step of, when the designated number of candidates is greater than or equal to 1, selecting one inter prediction information candidate from the inter prediction information candidates included in the inter prediction information candidate list and performing inter prediction on the encoding target prediction block using the selected inter prediction information candidate, and, when the designated number of candidates is 0, performing inter prediction on the encoding target prediction block using inter prediction information having a predetermined value.
[0031] Yet another aspect of the present invention provides a transmitter. The transmitter includes: a packet processor that packetizes a bit stream encoded according to a moving image encoding method that encodes moving images using motion-compensated prediction in units of blocks obtained by dividing each image of the moving images, so as to obtain a packetized bit stream; and a transmission unit that transmits the packetized bit stream. The moving image encoding method includes: a prediction information encoding step of encoding information indicating a designated number of inter prediction information candidates; a prediction information derivation step of deriving inter prediction information candidates from the inter prediction information of a prediction block neighboring an encoding target prediction block, or of a prediction block present at the same position as, or near, the encoding target prediction block in an encoded image at a position temporally different from that of the encoding target prediction block; a candidate list construction step of constructing an inter prediction information candidate list from the derived inter prediction information candidates; a candidate adding step of, when the number of inter prediction information candidates included in the constructed inter prediction information candidate list is smaller than the designated number, producing inter prediction information candidates whose prediction mode, reference index and motion vector have predetermined values and adding them to the constructed candidate list, and, when the number of candidates included in the resulting candidate list is still smaller than the designated number, producing one or more inter prediction information candidates in which at least one of the prediction mode, the reference index and the motion vector is changed with respect to the inter prediction information candidates having the predetermined values, and also adding these to the candidate list; a candidate supplementing step of, when the number of inter prediction information candidates included in the resulting candidate list is still smaller than the designated number, producing inter prediction information candidates whose prediction mode, reference index and motion vector have predetermined values until the number of candidates included in the candidate list reaches the designated number, and also adding these to the candidate list; and a motion-compensated prediction step of selecting one inter prediction information candidate from the inter prediction information candidates included in the inter prediction information candidate list and performing inter prediction on the encoding target prediction block using the selected inter prediction information candidate.
[0032] Yet another aspect of the present invention provides a transmission method. The method includes: a packet processing step of packetizing a bit stream encoded according to a moving image encoding method that encodes moving images using motion-compensated prediction in units of blocks obtained by dividing each image of the moving images, so as to obtain a packetized bit stream; and a transmission step of transmitting the packetized bit stream. The moving image encoding method includes: a prediction information encoding step of encoding information indicating a designated number of inter prediction information candidates; a prediction information derivation step of deriving inter prediction information candidates from the inter prediction information of a prediction block neighboring an encoding target prediction block, or of a prediction block present at the same position as, or near, the encoding target prediction block in an encoded image at a position temporally different from that of the encoding target prediction block; a candidate list construction step of constructing an inter prediction information candidate list from the derived inter prediction information candidates; a candidate adding step of, when the number of inter prediction information candidates included in the constructed inter prediction information candidate list is smaller than the designated number, producing inter prediction information candidates whose prediction mode, reference index and motion vector have predetermined values and adding them to the constructed candidate list, and, when the number of candidates included in the resulting candidate list is still smaller than the designated number, producing one or more inter prediction information candidates in which at least one of the prediction mode, the reference index and the motion vector is changed with respect to the inter prediction information candidates having the predetermined values, and also adding these to the candidate list; a candidate supplementing step of producing inter prediction information candidates whose prediction mode, reference index and motion vector have predetermined values until the number of candidates included in the candidate list reaches the designated number, when the number of inter prediction information candidates included in the resulting candidate list is still smaller than the
5/2536 22/150 designated amount of interpreter candidate information and also add derived interpreter candidate information to the added interpreter information candidate list as well; and a compensated motion prediction step to select an interpreter information candidate from the interpreter candidate information included in the interpreter information candidate list and perform interpreter on the coding target forecast block using the interpreter information candidate. selected interprevision. [0033] A moving image decoding device according to an aspect of the present invention is a moving image decoding device that decodes a bit stream obtained by encoding moving images using compensated motion prediction in units of blocks obtained by dividing each image into the moving images, including: a forecast information decoding unit (202) that decodes information indicating a previously designated amount of information from interpretation candidates; a forecast information derivation unit (205) that produces interpreter forecast information from forecast forecast information from a forecast block next to a decoding target forecast block or a forecast block present in the same position or near the decode target preview block in an image decoded in a position temporarily different from the decode target preview block; a candidate list building unit (230) that builds a candidate list of interpreter information from derived interpreter candidate information; an additional candidate unit (235) that produces information from interprevision candidates whose forecast mode, Petition 870190021217, of 03/01/2019, p. 
reference index, and motion vector have predetermined values until the number of inter prediction information candidates included in the constructed inter prediction information candidate list reaches the previously designated number of inter prediction information candidates when the number of inter prediction information candidates included in the constructed inter prediction information candidate list is smaller than the previously designated number of inter prediction information candidates, and adds the derived inter prediction information candidates to the constructed inter prediction information candidate list; and a motion-compensated prediction unit (206) that selects an inter prediction information candidate from the inter prediction information candidates and performs inter prediction on the decoding target prediction block using the selected inter prediction information candidate.

[0034] Another aspect of the present invention provides a moving image decoding device. The device is a moving image decoding device that decodes a bit stream obtained by coding moving images using motion-compensated prediction in units of blocks obtained by dividing each image of the moving images, including: a prediction information decoding unit (202) that decodes information indicating a previously designated number of inter prediction information candidates; a prediction information derivation unit (205) that derives inter prediction information candidates from the inter prediction information of a prediction block neighboring a decoding target prediction block, or of a prediction block present at the same position as or near the decoding target prediction block in a decoded image at a position temporally different from that of the decoding target prediction block; a candidate list construction unit (230) that constructs an inter prediction information candidate list from the derived inter prediction information candidates; a candidate addition unit (234) that derives an inter prediction information candidate whose prediction mode, reference index, and motion vector have predetermined values when the number of inter prediction information candidates included in the constructed inter prediction information candidate list is smaller than the previously designated number of inter prediction information candidates, adds the derived inter prediction information candidate to the constructed inter prediction information candidate list, derives one or more inter prediction information candidates in which at least one of the prediction mode, the reference index, and the motion vector is changed from the inter prediction information candidate having the predetermined values when the number of inter prediction information candidates included in the added inter prediction information candidate list is still smaller than the previously designated number of inter prediction information candidates, and further adds the derived inter prediction information candidates to the added inter prediction information candidate list; a candidate supplementing unit (235) that derives inter prediction information candidates whose prediction mode, reference index, and motion vector have predetermined values until the number of inter prediction information candidates included in the added inter prediction information candidate list reaches the previously designated number of inter prediction information candidates when the number of inter prediction information candidates included in the added inter prediction information candidate list is still smaller than the previously designated number of inter prediction information candidates, and further adds the derived inter prediction information candidates to the added inter prediction information candidate list; and a motion-compensated prediction unit (206) that selects an inter prediction information candidate from the inter prediction information candidates and performs inter prediction on the decoding target prediction block using the selected inter prediction information candidate.

[0035] Yet another aspect of the present invention provides a moving image decoding device. The device is a moving image decoding device that decodes a bit stream obtained by coding moving images using motion-compensated prediction in units of blocks obtained by dividing each image of the moving images, including: a prediction information decoding unit (202) that decodes information indicating a previously designated number of inter prediction information candidates; a prediction information derivation unit (205) that derives inter prediction information candidates from the inter prediction information of a prediction block neighboring a decoding target prediction block, or of a prediction block present at the same position as or near the decoding target prediction block in a decoded image at a position temporally different from that of the decoding target prediction block; a candidate list construction unit (230) that constructs an inter prediction information candidate list from the derived inter prediction information candidates; a candidate supplementing unit (235) that derives an inter prediction information candidate whose prediction mode, reference index, and motion vector have predetermined values when the number of inter prediction information candidates included in the constructed inter prediction information candidate list is smaller than the previously designated number of inter prediction information candidates and adds the derived inter prediction information candidate to the constructed inter prediction information candidate list, derives one or more inter prediction information candidates whose prediction mode and motion vector have the same values as, and whose reference index is changed from, the inter prediction information candidate having the predetermined values when the number of inter prediction information candidates included in the added inter prediction information candidate list is still smaller than the previously designated number of inter prediction information candidates and further adds the derived inter prediction information candidates to the added inter prediction information candidate list, and derives inter prediction information candidates whose prediction mode, reference index, and motion vector have predetermined values until the number of inter prediction information candidates included in the added inter prediction information candidate list reaches the previously designated number of inter prediction information candidates when the number of inter prediction information candidates included in the added inter prediction information candidate list is still smaller than the previously designated number of inter prediction information candidates, and further adds the derived inter prediction information candidates to the added inter prediction information candidate list; and a motion-compensated prediction unit (206) that selects an inter prediction information candidate from the inter prediction information candidates and performs inter prediction on the decoding target prediction block using the selected inter prediction information candidate.
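As an illustration of the three-stage list construction described in the aspects above (derivation from neighboring blocks, addition of candidates whose reference index is changed from a candidate with predetermined values, and supplementation with fixed-value candidates until the designated number is reached), the following Python sketch builds a candidate list of exactly the designated size. It is a simplified model, not the normative procedure of any embodiment; the class and function names, the zero motion vector, and the L0 prediction-mode default are illustrative assumptions.

```python
from dataclasses import dataclass
from typing import List, Tuple

# Illustrative constant; actual prediction-mode values are codec-specific.
PRED_L0 = 0  # uni-prediction from reference list 0


@dataclass(frozen=True)
class InterPredInfo:
    """One inter prediction information candidate."""
    pred_mode: int              # prediction mode
    ref_idx: int                # reference index
    mv: Tuple[int, int]         # motion vector (x, y)


def build_candidate_list(derived: List[InterPredInfo],
                         designated_num: int,
                         num_ref_idx: int) -> List[InterPredInfo]:
    """Build an inter prediction information candidate list of exactly
    `designated_num` entries, mirroring the three stages described above."""
    # Stage 1: candidates derived from neighboring / temporally different
    # prediction blocks (duplicates removed, list truncated if too long).
    candidates = list(dict.fromkeys(derived))[:designated_num]

    # Stage 2 (candidate addition): add a candidate with predetermined
    # values (zero motion vector, reference index 0), then variants in
    # which only the reference index is changed.
    ref_idx = 0
    while len(candidates) < designated_num and ref_idx < num_ref_idx:
        cand = InterPredInfo(PRED_L0, ref_idx, (0, 0))
        if cand not in candidates:
            candidates.append(cand)
        ref_idx += 1

    # Stage 3 (candidate supplementation): repeat a candidate whose
    # prediction mode, reference index, and motion vector all have
    # predetermined values until the designated number is reached.
    # Duplicates are allowed here, so the target length is guaranteed.
    while len(candidates) < designated_num:
        candidates.append(InterPredInfo(PRED_L0, 0, (0, 0)))
    return candidates
```

Because the supplementing stage may append identical candidates, the list always reaches the designated number, which is what allows an encoder and decoder to index candidates consistently without signaling a variable list length.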
[0036] Yet another aspect of the present invention provides a moving image decoding device. The device is a moving image decoding device that decodes a bit stream obtained by coding moving images using motion-compensated prediction in units of blocks obtained by dividing each image of the moving images, including: a prediction information decoding unit (202) that decodes a designated number of inter prediction information candidates; a prediction information derivation unit (205) that stores and initializes the designated number of inter prediction information candidates having predetermined prediction modes, reference indices, and motion vectors in an inter prediction information candidate list in which the designated number of inter prediction information candidates is stored, and then derives inter prediction information candidates from the inter prediction information of a prediction block neighboring a decoding target prediction block, or of a prediction block present at the same position as or near the decoding target prediction block in a decoded image at a position temporally different from that of the decoding target prediction block; and a motion-compensated prediction unit (206) that selects an inter prediction information candidate from the inter prediction information candidates included in the inter prediction information candidate list and performs inter prediction on the decoding target prediction block using the selected inter prediction information candidate.

[0037] Yet another aspect of the present invention provides a moving image decoding device. The device is a moving image decoding device that decodes a bit stream obtained by coding moving images using motion-compensated prediction in units of blocks obtained by dividing each image of the moving images, including: a prediction information decoding unit (202) that decodes information indicating a previously designated number of inter prediction information candidates; a prediction information derivation unit (205) that derives inter prediction information candidates, with the designated number of candidates as the number of inter prediction information candidates, from the inter prediction information of a prediction block neighboring a decoding target prediction block, or of a prediction block present at the same position as or near the decoding target prediction block in a decoded image at a position temporally different from that of the decoding target prediction block; a candidate list construction unit (230) that constructs an inter prediction information candidate list from the derived inter prediction information candidates; and a motion-compensated prediction unit (206) that selects an inter prediction information candidate from the inter prediction information candidates included in the inter prediction information candidate list when the previously designated number of candidates is greater than or equal to 1 and performs inter prediction on the decoding target prediction block using the selected inter prediction information candidate, and performs inter prediction on the decoding target prediction block using inter prediction information having a predetermined value when the previously designated number of candidates is 0.

[0038] Yet another aspect of the present invention provides a moving image decoding device. The device is a moving image decoding device that decodes a bit stream obtained by coding moving images using motion-compensated prediction in units of blocks obtained by dividing each image of the moving images, including: a prediction information derivation unit (205) that derives inter prediction information candidates from the inter prediction information of a prediction block neighboring a decoding target prediction block, or of a prediction block present at the same position as or near the decoding target prediction block in a decoded image at a position temporally different from that of the decoding target prediction block; a candidate supplementing unit (235) that supplements inter prediction information candidates having the same prediction mode, reference index, and motion vector values until the number of inter prediction information candidates reaches the designated number of candidates when the number of inter prediction information candidates is smaller than the designated number of candidates; and a motion-compensated prediction unit (206) that selects an inter prediction information candidate from the inter prediction information candidates and performs inter prediction on the decoding target prediction block using the selected inter prediction information candidate.

[0039] Yet another aspect of the present invention provides a moving image decoding method. The method is a moving image decoding method for decoding a bit stream obtained by coding moving images using motion-compensated prediction in units of
blocks obtained by dividing each image of the moving images, including: a prediction information decoding step of decoding information indicating a previously designated number of inter prediction information candidates; a prediction information derivation step of deriving inter prediction information candidates from the inter prediction information of a prediction block neighboring a decoding target prediction block, or of a prediction block present at the same position as or near the decoding target prediction block in a decoded image at a position temporally different from that of the decoding target prediction block; a candidate list construction step of constructing an inter prediction information candidate list from the derived inter prediction information candidates; a candidate supplementing step of deriving inter prediction information candidates whose prediction mode, reference index, and motion vector have predetermined values until the number of inter prediction information candidates included in the constructed inter prediction information candidate list reaches the previously designated number of inter prediction information candidates when the number of inter prediction information candidates included in the constructed inter prediction information candidate list is smaller than the previously designated number of inter prediction information candidates, and adding the derived inter prediction information candidates to the constructed inter prediction information candidate list; and a motion-compensated prediction step of selecting an inter prediction information candidate from the inter prediction information candidates and performing inter prediction on the decoding target prediction block using the selected inter prediction information candidate.

[0040] Yet another aspect of the present invention provides a moving image decoding method. The method is a moving image decoding method for decoding a bit stream obtained by coding moving images using motion-compensated prediction in units of blocks obtained by dividing each image of the moving images, including: a prediction information decoding step of decoding information indicating a previously designated number of inter prediction information candidates; a prediction information derivation step of deriving inter prediction information candidates from the inter prediction information of a prediction block neighboring a decoding target prediction block, or of a prediction block present at the same position as or near the decoding target prediction block in a decoded image at a position temporally different from that of the decoding target prediction block; a candidate list construction step of constructing an inter prediction information candidate list from the derived inter prediction information candidates; a candidate addition step of deriving an inter prediction information candidate whose prediction mode, reference index, and motion vector have predetermined values when the number of inter prediction information candidates included in the constructed inter prediction information candidate list is smaller than the previously designated number of inter prediction information candidates, adding the derived inter prediction information candidate to the constructed inter prediction information candidate list, deriving one or more inter prediction information candidates in which at least one of the prediction mode, the reference index, and the motion vector is changed from the inter prediction information candidate having the predetermined values when the number of inter prediction information candidates included in the added inter prediction information candidate list is still smaller than the previously designated number of inter prediction information candidates, and further adding the derived inter prediction information candidates to the added inter prediction information candidate list; a candidate supplementing step of deriving inter prediction information candidates whose prediction mode, reference index, and motion vector have predetermined values until the number of inter prediction information candidates included in the added inter prediction information candidate list reaches the previously designated number of inter prediction information candidates when the number of inter prediction information candidates included in the added inter prediction information candidate list is still smaller than the previously designated number of inter prediction information candidates, and further adding the derived inter prediction information candidates to the added inter prediction information candidate list; and a motion-compensated prediction step of selecting an inter prediction information candidate from the inter prediction information candidates and performing inter prediction on the decoding target prediction block using the selected inter prediction information candidate.

[0041] Yet another aspect of the present invention provides a moving image decoding method. The method is a moving image decoding method for decoding a bit stream obtained by coding moving images using motion-compensated prediction in units of blocks obtained by dividing each image of the moving images, including: a prediction information decoding step of decoding information indicating a previously designated number of inter prediction information candidates; a prediction information derivation step of deriving inter prediction information candidates from the inter prediction information of a prediction block neighboring a decoding target prediction block, or of a prediction block present at the same position as or near the decoding target prediction block in a decoded image at a position temporally different from that of the decoding target prediction block; a candidate list construction step of constructing an inter prediction information candidate list from the derived inter prediction information candidates; a candidate addition step of deriving an inter prediction information candidate whose prediction mode, reference index, and motion vector have predetermined values when the number of inter prediction information candidates included in the constructed inter prediction information candidate list is smaller than the previously designated number of inter prediction information candidates, adding the derived inter prediction information candidate to the constructed inter prediction information candidate list, deriving one or more inter prediction information candidates whose prediction mode and motion vector have the same values as, and whose reference index is changed from, the inter prediction information candidate having the predetermined values when the number of inter prediction information candidates included in the added inter prediction information candidate list is still smaller than the previously designated number of inter prediction information candidates, and further adding the derived inter prediction information candidates to the added inter prediction information candidate list, and deriving inter prediction information candidates whose prediction mode, reference index, and motion vector have predetermined values until the number of inter prediction information candidates included in the
added inter prediction information candidate list reaches the previously designated number of inter prediction information candidates when the number of inter prediction information candidates included in the added inter prediction information candidate list is still smaller than the previously designated number of inter prediction information candidates, and further adding the derived inter prediction information candidates to the added inter prediction information candidate list; and a motion-compensated prediction step of selecting an inter prediction information candidate from the inter prediction information candidates and performing inter prediction on the decoding target prediction block using the selected inter prediction information candidate.

[0042] Yet another aspect of the present invention provides a moving image decoding method. The method is a moving image decoding method for decoding a bit stream obtained by coding moving images using motion-compensated prediction in units of blocks obtained by dividing each image of the moving images, including: a prediction information decoding step of decoding a designated number of inter prediction information candidates; a prediction information derivation step of storing and initializing the designated number of inter prediction information candidates having predetermined prediction modes, reference indices, and motion vectors in an inter prediction information candidate list in which the designated number of inter prediction information candidates is stored, and then deriving inter prediction information candidates from the inter prediction information of a prediction block neighboring a decoding target prediction block, or of a prediction block present at the same position as or near the decoding target prediction block in a decoded image at a position temporally different from that of the decoding target prediction block; and a motion-compensated prediction step of selecting an inter prediction information candidate from the inter prediction information candidates included in the inter prediction information candidate list and performing inter prediction on the decoding target prediction block using the selected inter prediction information candidate.

[0043] Yet another aspect of the present invention provides a moving image decoding method. The method is a moving image decoding method for decoding a bit stream obtained by coding moving images using motion-compensated prediction in units of blocks obtained by dividing each image of the moving images, including: a prediction information decoding step of decoding information indicating a previously designated number of inter prediction information candidates; a prediction information derivation step of deriving inter prediction information candidates, with the designated number of candidates as the number of inter prediction information candidates, from the inter prediction information of a prediction block neighboring a decoding target prediction block, or of a prediction block present at the same position as or near the decoding target prediction block in a decoded image at a position temporally different from that of the decoding target prediction block; a candidate list construction step of constructing an inter prediction information candidate list from the derived inter prediction information candidates; and a motion-compensated prediction step of selecting an inter prediction information candidate from the inter prediction information candidates included in the inter prediction information candidate list when the previously designated number of candidates is greater than or equal to 1 and performing inter prediction on the decoding target prediction block using the selected inter prediction information candidate, and performing inter prediction on the decoding target prediction block using inter prediction information having a predetermined value when the previously designated number of candidates is 0.

[0044] Another aspect of the present invention provides a receiver. The receiver is a receiver that receives and decodes a bit stream obtained by coding moving images, including: a receiving unit that receives a packetized bit stream obtained by packetizing a bit stream obtained by coding moving images using motion-compensated prediction in units of blocks obtained by dividing each image of the moving images; a reconstruction unit that depacketizes the received packetized bit stream to reconstruct the original bit stream; a prediction information decoding unit (202) that decodes information indicating a previously designated number of inter prediction information candidates from the reconstructed bit stream; a prediction information derivation unit (205) that derives inter prediction information candidates from the inter prediction information of a prediction block neighboring a decoding target prediction block, or of a prediction block present at the same position as or near the decoding target prediction block in a decoded image at a position temporally different from that of the decoding target prediction block; a candidate list construction unit (230) that constructs an inter prediction information candidate list from the derived inter prediction information candidates; a candidate addition unit (234) that derives an inter prediction information candidate whose prediction mode, reference index, and motion vector have predetermined values when the number of inter prediction information candidates included in the constructed inter prediction information candidate list is smaller than the previously designated number of inter prediction information candidates, adds the derived inter prediction information candidate to the constructed inter prediction information candidate list, derives one or more inter prediction information candidates in which at least one of the prediction mode, the reference index, and the motion vector is changed from the inter prediction information candidate having the predetermined values when the number of inter prediction information candidates included in the added inter prediction information candidate list is still smaller than the previously designated number of inter prediction information candidates, and further adds the derived inter prediction information candidates to the added inter prediction information candidate list; a candidate supplementing unit (235) that derives inter prediction information candidates whose prediction mode, reference index, and motion vector have predetermined values until the number of inter prediction information candidates included in the
41/536 38/150 added Interpretation also reaches the previously designated amount of Interpretation Candidate Information when the amount of Interpretation Candidate Information included in the Interpretation Candidate Information Added is also less than the previously designated Interpretation Candidate Information and also add the derived interpreter candidate information to the added interpreter information candidate list as well; and a compensated motion prediction unit (206) that selects an interpreter information candidate from the interpreter candidate information and performs interpreter on the decoding target forecast block using the selected interpreter information candidate. [0045] Another aspect of the present invention provides a method of reception. The method is a reception method for receiving and decoding a bit stream obtained by encoding moving images, including: a step of receiving a bit stream obtained by packetizing a bit stream obtained by encoding images in motion movement using compensated movement prediction in units of blocks obtained by dividing each image among the moving images; a reconstruction step to package the received bit stream to reconstruct an original bit stream; a step of decoding prediction information from decoded information indicating a previously designated amount of information from interpretation candidates from the reconstructed bit stream; a step of producing forecast information to produce the interpreter forecast information from the forecast information from a forecast block next to a decoding target forecast block or to a forecast block present in the same position or next to the block Petition 870190021217, of 03/01/2019, p. 
decoding target prediction block in a decoded picture at a temporally different position from the decoding target prediction block; a candidate list construction step of constructing an inter prediction information candidate list from the derived inter prediction information candidates; a candidate addition step of deriving an inter prediction information candidate whose prediction mode, reference index and motion vector have predetermined values when the number of inter prediction information candidates included in the constructed inter prediction information candidate list is smaller than the previously designated number of inter prediction information candidates and adding the derived inter prediction information candidate to the constructed inter prediction information candidate list, and deriving one or more inter prediction information candidates in which at least one of the prediction mode, the reference index and the motion vector is changed from the inter prediction information candidate having the predetermined values when the number of inter prediction information candidates included in the added inter prediction information candidate list is still smaller than the previously designated number of inter prediction information candidates and further adding the derived inter prediction information candidates to the added inter prediction information candidate list; a supplementary candidate step of deriving inter prediction information candidates whose prediction mode, reference index and motion vector have predetermined values until the number of inter prediction information candidates included in the added inter prediction information candidate list reaches the previously designated number of inter prediction information candidates when the number of
inter prediction information candidates included in the added inter prediction information candidate list is still smaller than the previously designated number of inter prediction information candidates, and further adding the derived inter prediction information candidates to the added inter prediction information candidate list; and a motion-compensated prediction step of selecting one inter prediction information candidate from the inter prediction information candidates and performing inter prediction on the decoding target prediction block using the selected inter prediction information candidate. [0046] An optional combination of the constituent components described above, and an embodiment obtained by exchanging expressions of the present invention among methods, devices, systems, recording media, computer programs and the like, are also effective as aspects of the present invention. [0047] According to the present invention, it is possible to reduce the occurrence amount of coding information to be transmitted, and thus to improve the coding efficiency, by deriving the candidates of inter prediction information used in motion-compensated prediction according to the situation. BRIEF DESCRIPTION OF THE DRAWINGS [0048] FIG. 1 is a block diagram illustrating a configuration of a moving image encoding device that performs a motion vector prediction method according to an embodiment; [0049] FIG. 2 is a block diagram illustrating a configuration of a moving image decoding device that performs a motion vector prediction method according to an embodiment; [0050] FIG. 3 is a diagram for describing a tree block and a coding block; [0051] FIGS. 4A to 4H are diagrams for describing ways of dividing a prediction block; [0052] FIG. 5 is a diagram for describing a prediction block for a spatial merge candidate in a merge mode; [0053] FIG. 6 is a diagram for describing a prediction block for a spatial merge candidate in a merge mode; [0054] FIG.
7 is a diagram for describing a prediction block for a spatial merge candidate in a merge mode; [0055] FIG. 8 is a diagram for describing a prediction block for a spatial merge candidate in a merge mode; [0056] FIG. 9 is a diagram for describing a prediction block for a temporal merge candidate in a merge mode; [0057] FIG. 10 is a diagram for describing the syntax of a bit stream in the respective prediction blocks in a merge mode; [0058] FIG. 11 is a diagram for describing an example of an entropy code for a merge index syntax element; [0059] FIG. 12 is a block diagram illustrating a detailed configuration of an inter prediction information derivation unit of the moving image encoding device illustrated in FIG. 1, according to a first practical example; [0060] FIG. 13 is a block diagram illustrating a detailed configuration of an inter prediction information derivation unit of the moving image decoding device illustrated in FIG. 2, according to the first practical example; [0061] FIGS. 14A to 14H are diagrams for describing prediction blocks neighboring a processing target prediction block in a merge mode; [0062] FIG. 15 is a flowchart for describing the flow of a process for deriving merge candidates and constructing a merge candidate list in a merge mode, according to the first practical example; [0063] FIG. 16 is a flowchart for describing the flow of a process for deriving spatial merge candidates in a merge mode; [0064] FIGS. 17A to 17H are diagrams for describing neighboring blocks referred to in a process for deriving the reference index of a temporal merge candidate; [0065] FIG. 18 is a flowchart for describing the flow of a process for deriving the reference indices of temporal merge candidates in a merge mode; [0066] FIG. 19 is a flowchart for describing the flow of a process for deriving temporal merge candidates in a merge mode; [0067] FIG.
20 is a flowchart for describing the flow of a process for deriving a picture of a different time in a merge mode; [0068] FIG. 21 is a flowchart for describing the flow of a process for deriving prediction blocks of a picture of a different time in a merge mode; [0069] FIG. 22 is a flowchart for describing the flow of a process for deriving temporal merge candidates in a merge mode; [0070] FIG. 23 is a flowchart for describing the flow of a process for deriving temporal merge candidates in a merge mode; [0071] FIG. 24 is a flowchart for describing the flow of a motion vector scaling process; [0072] FIG. 25 is a flowchart for describing the flow of a motion vector scaling process; [0073] FIG. 26 is a flowchart for describing the flow of a process for deriving additional merge candidates in a merge mode; [0074] FIG. 27 is a flowchart for describing the flow of a merge candidate limitation process; [0075] FIG. 28 is a block diagram illustrating a detailed configuration of an inter prediction information derivation unit of the moving image encoding device illustrated in FIG. 1, according to second to seventh practical examples; [0076] FIG. 29 is a block diagram illustrating a detailed configuration of an inter prediction information derivation unit of the moving image decoding device illustrated in FIG. 2, according to the second to seventh practical examples; [0077] FIG. 30 is a flowchart for describing the flow of a process for deriving merge candidates and constructing a merge candidate list in a merge mode, according to the second to seventh practical examples; [0078] FIG. 31 is a flowchart for describing the flow of a process for supplementing valid merge candidates in a merge mode, according to the second practical example; [0079] FIG. 32 is a flowchart for describing the flow of a process for supplementing valid merge candidates in a merge mode, according to the third practical example; [0080] FIG.
33 is a flowchart for describing the flow of a process for supplementing valid merge candidates in a merge mode, according to the fourth practical example; [0081] FIG. 34 is a flowchart for describing the flow of a process for deriving additional merge candidates and a process for supplementing valid merge candidates in a merge mode, according to the fifth practical example; [0082] FIG. 35 is a flowchart for describing the flow of a process for deriving initialized inter prediction information valid as a merge candidate in a merge mode, according to the sixth and seventh practical examples; [0083] FIG. 36 is a diagram for describing the temporal direct mode of the conventional MPEG-4 AVC/H.264 standard; [0084] FIG. 37 is a flowchart for describing the process flow of an inter prediction information selection unit of an inter prediction information derivation unit of a moving image encoding device; and [0085] FIG. 38 is a flowchart for describing the process flow of an inter prediction information selection unit of an inter prediction information derivation unit of a moving image decoding device. BEST MODE FOR CARRYING OUT THE INVENTION [0086] The present embodiment relates to a moving image encoding technique and, in particular, to a moving image encoding technique for dividing a picture into rectangular blocks having optional sizes and shapes and performing motion compensation between pictures in units of blocks. In this technique, a plurality of motion vector predictors is derived from a motion vector of a block neighboring an encoding target block or of a block of an encoded picture in order to improve the coding efficiency, and a vector difference between the motion vector of the encoding target block and a selected motion vector predictor is derived and encoded to reduce the coding amount.
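The predictor selection and difference coding just described can be sketched as follows; the function name, the candidate list and the absolute-difference cost measure are illustrative assumptions, not the selection criterion actually used by the encoder:

```python
def best_mvp_and_mvd(mv, mvp_candidates):
    """Pick the predictor from mvp_candidates that minimizes a hypothetical
    cost (sum of absolute motion vector difference components) and return
    its index together with the motion vector difference to be encoded."""
    def cost(i):
        return abs(mv[0] - mvp_candidates[i][0]) + abs(mv[1] - mvp_candidates[i][1])
    best = min(range(len(mvp_candidates)), key=cost)
    mvp = mvp_candidates[best]
    # The decoder reconstructs mv as mvp + mvd from the transmitted index and mvd.
    return best, (mv[0] - mvp[0], mv[1] - mvp[1])
```

The decoder side inverts this: it rebuilds the same predictor list, picks the candidate indicated by the decoded index and adds the decoded difference.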
Alternatively, the coding information of the encoding target block is derived using coding information of a block neighboring the encoding target block or of a block of an encoded picture, to reduce the coding amount. In addition, when moving images are decoded, a plurality of motion vector predictors is derived from a motion vector of a block neighboring a decoding target block or of a block of a decoded picture, and the motion vector of the decoding target block is derived and decoded from a vector difference decoded from a bit stream and a selected motion vector predictor. Alternatively, the coding information of the decoding target block is derived using coding information of a block neighboring the decoding target block or of a block of a decoded picture. [0087] First, techniques and terms used in the present embodiment are defined. TREE BLOCK AND CODING BLOCK [0088] In the embodiments, one or more slices obtained by dividing a picture are the basic unit of coding, and a slice type, which is information indicating the type of the slice, is set for each slice. As illustrated in FIG. 3, a slice is evenly divided into square units having the same optional size. Each square unit is defined as a tree block and is used as a basic address management unit for specifying an encoding/decoding target block in a slice (an encoding target block in encoding processes and a decoding target block in decoding processes; the same applies to the description below unless otherwise stated). Except in the monochrome case, a tree block includes one luma signal and two chroma signals. The size of a tree block can be freely set to a power of 2 according to the picture size or the texture in the picture.
A tree block can be divided into blocks having smaller block sizes by hierarchically dividing the luma signal and the chroma signals in the tree block into four parts (two parts in each of the vertical and horizontal directions) as necessary, according to the texture in the picture, so that the encoding process can be optimized. Each such block is defined as a coding block and is the basic unit of the encoding and decoding processes. Except in the monochrome case, a coding block includes one luma signal and two chroma signals. The largest size of a coding block is the same as the size of a tree block. A coding block having the smallest coding block size is called a smallest coding block, and its size can be freely set to a power of 2. [0089] In FIG. 3, coding block A is a coding block obtained without dividing a tree block. Coding block B is a coding block obtained by dividing a tree block into four parts. Coding block C is a coding block obtained by further dividing into four parts a block obtained by dividing a tree block into four parts. Coding block D is a coding block obtained by twice hierarchically dividing into four parts a block obtained by dividing a tree block into four parts, and is a coding block having the smallest size. PREDICTION MODE [0090] Intra prediction (MODE_INTRA), in which prediction is performed from neighboring image signals in an encoded/decoded state (used for pictures, prediction blocks, image signals and the like obtained by decoding encoded signals in the encoding process, and for decoded pictures, prediction blocks, image signals and the like in the decoding process; the same applies to the description below unless otherwise stated) within the encoding/decoding target picture, and inter prediction (MODE_INTER), in which prediction is performed from image signals of encoded/decoded pictures, are switched in the respective coding blocks.
The mode identifying intra prediction (MODE_INTRA) and inter prediction (MODE_INTER) is defined as the prediction mode (PredMode). The prediction mode (PredMode) takes the value of intra prediction (MODE_INTRA) or inter prediction (MODE_INTER) and can be selected and encoded. PARTITION MODE, PREDICTION BLOCK AND PREDICTION UNIT [0091] When a picture is divided into blocks to perform intra prediction (MODE_INTRA) and inter prediction (MODE_INTER), a coding block is divided as necessary to perform prediction, in order to make the units for switching between intra prediction and inter prediction smaller. The mode identifying the method of dividing the luma signal and the chroma signals of a coding block is defined as the partition mode (PartMode). The divided blocks are defined as prediction blocks. As illustrated in FIGS. 4A to 4H, eight partition modes (PartMode) are defined depending on the method of dividing the luma signal of a coding block. [0092] The partition mode (PartMode) in which the luma signal of the coding block illustrated in FIG. 4A is not divided but is treated as one prediction block is defined as 2Nx2N partition (PART_2Nx2N). The partition modes (PartMode) in which the luma signals of the coding blocks illustrated in FIGS. 4B, 4C and 4D are divided into two prediction blocks arranged in the vertical direction are defined as 2NxN partition (PART_2NxN), 2NxnU partition (PART_2NxnU) and 2NxnD partition (PART_2NxnD), respectively. Here, the 2NxN partition (PART_2NxN) is a partition mode in which the luma signal is divided in the ratio 1:1 in the vertical direction, the 2NxnU partition (PART_2NxnU) is a partition mode in which the luma signal is divided in the ratio 1:3 in the vertical direction, and the 2NxnD partition (PART_2NxnD) is a partition mode in which the luma signal is divided in the ratio 3:1 in the vertical direction. The partition modes (PartMode) in which the luma signals of the coding blocks illustrated in FIGS.
4E, 4F and 4G are divided into two prediction blocks arranged in the horizontal direction are defined as Nx2N partition (PART_Nx2N), nLx2N partition (PART_nLx2N) and nRx2N partition (PART_nRx2N), respectively. Here, the Nx2N partition (PART_Nx2N) is a partition mode in which the luma signal is divided in the ratio 1:1 in the horizontal direction, the nLx2N partition (PART_nLx2N) is a partition mode in which the luma signal is divided in the ratio 1:3 in the horizontal direction, and the nRx2N partition (PART_nRx2N) is a partition mode in which the luma signal is divided in the ratio 3:1 in the horizontal direction. The partition mode (PartMode) in which the luma signal of the coding block illustrated in FIG. 4H is divided into four parts in the vertical and horizontal directions to obtain four prediction blocks is defined as NxN partition (PART_NxN). [0093] The chroma signal is divided in the same vertical and horizontal division ratios as the luma signal in each partition mode (PartMode). [0094] In order to specify each prediction block in a coding block, a number starting from 0 is allocated to the prediction blocks present in the coding block, in coding order. This number is defined as the partition index PartIdx. The number written in each prediction block of the coding blocks illustrated in FIGS. 4A to 4H indicates the partition index PartIdx of that prediction block. In the 2NxN partition (PART_2NxN), the 2NxnU partition (PART_2NxnU) and the 2NxnD partition (PART_2NxnD) illustrated in FIGS. 4B, 4C and 4D, the partition index PartIdx of the upper prediction block is set to 0 and the partition index PartIdx of the lower prediction block is set to 1. In the Nx2N partition (PART_Nx2N), the nLx2N partition (PART_nLx2N) and the nRx2N partition (PART_nRx2N) illustrated in FIGS. 4E, 4F and 4G, the partition index PartIdx of the left prediction block is set to 0 and the partition index PartIdx of the right prediction block is set to 1.
In the NxN partition (PART_NxN) illustrated in FIG. 4H, the partition index PartIdx of the top-left prediction block is set to 0, the partition index PartIdx of the top-right prediction block is set to 1, the partition index PartIdx of the bottom-left prediction block is set to 2, and the partition index PartIdx of the bottom-right prediction block is set to 3. [0095] When the prediction mode (PredMode) is inter prediction (MODE_INTER), the 2Nx2N partition (PART_2Nx2N), the 2NxN partition (PART_2NxN), the 2NxnU partition (PART_2NxnU), the 2NxnD partition (PART_2NxnD), the Nx2N partition (PART_Nx2N), the nLx2N partition (PART_nLx2N) and the nRx2N partition (PART_nRx2N) are defined as the partition modes (PartMode); for coding block D only, which is the smallest coding block, the NxN partition (PART_NxN) could additionally be defined as a partition mode (PartMode) besides the 2Nx2N partition (PART_2Nx2N), the 2NxN partition (PART_2NxN), the 2NxnU partition (PART_2NxnU), the 2NxnD partition (PART_2NxnD), the Nx2N partition (PART_Nx2N), the nLx2N partition (PART_nLx2N) and the nRx2N partition (PART_nRx2N). However, in the present embodiment, the NxN partition (PART_NxN) is not defined as a partition mode (PartMode) for inter prediction. [0096] When the prediction mode (PredMode) is intra prediction (MODE_INTRA), only the 2Nx2N partition (PART_2Nx2N) is defined as the partition mode (PartMode) for coding blocks other than coding block D, which is the smallest coding block, while for coding block D, which is the smallest coding block, the NxN partition (PART_NxN) is defined as a partition mode (PartMode) in addition to the 2Nx2N partition (PART_2Nx2N). The reason the NxN partition (PART_NxN) is not defined for coding blocks other than the smallest coding block is that a coding block other than the smallest coding block can instead be divided into four parts to express smaller coding blocks.
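The partition-mode availability rules of [0095] and [0096] can be summarized in a small sketch; the function name and the string identifiers are illustrative:

```python
def allowed_part_modes(pred_mode, is_smallest_cb):
    """Partition modes selectable for a coding block, per [0095]/[0096].

    Intra: PART_2Nx2N always, plus PART_NxN for the smallest coding block.
    Inter: the seven rectangular partitions; PART_NxN is not defined as a
    partition mode for inter prediction in the present embodiment."""
    if pred_mode == 'MODE_INTRA':
        return ['PART_2Nx2N'] + (['PART_NxN'] if is_smallest_cb else [])
    return ['PART_2Nx2N', 'PART_2NxN', 'PART_2NxnU', 'PART_2NxnD',
            'PART_Nx2N', 'PART_nLx2N', 'PART_nRx2N']
```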
POSITIONS OF TREE BLOCK, CODING BLOCK, PREDICTION BLOCK AND TRANSFORM BLOCK [0097] The positions of the blocks of the present embodiment, including the tree block, the coding block, the prediction block and the transform block, are represented such that the position of the pixel of the top-left luma signal of the luma signal screen is set as the origin (0, 0), and the position of the pixel of the top-left luma signal included in the region of each block is represented by a two-dimensional coordinate (x, y). The directions of the coordinate axes are defined such that the rightward direction of the horizontal direction and the downward direction of the vertical direction are the positive directions, and the unit is one pixel of the luma signal. Not only when the chroma format is 4:4:4, in which the picture size (the number of pixels) is the same for the luma signal and the chroma signal, but also when the chroma format is 4:2:0 or 4:2:2, in which the picture size (the number of pixels) of the chroma signal differs from that of the luma signal, the position of each chroma signal block is represented by the coordinate of a pixel of the luma signal included in the region of the block, and the unit is one pixel of the luma signal. In this way, it is possible to specify the position of each chroma signal block and to clearly understand the positional relationship between a luma signal block and a chroma signal block merely by comparing the coordinate values. INTER PREDICTION MODE AND REFERENCE LIST [0098] In the embodiment of the present invention, a plurality of decoded pictures can be used as reference pictures in inter prediction, in which prediction is performed from image signals of encoded/decoded pictures. In order to specify a reference picture selected from the plurality of reference pictures, a reference index is allocated to each prediction block.
In B slices, two optional reference pictures can be selected for each prediction block to perform inter prediction, and the inter prediction modes include L0 prediction (Pred_L0), L1 prediction (Pred_L1) and bi-prediction (Pred_BI). Reference pictures are managed by L0 (reference list 0) and L1 (reference list 1) of a list structure, and a reference picture can be specified by designating a reference index of L0 or L1. L0 prediction (Pred_L0) is inter prediction that refers to a reference picture managed by L0, L1 prediction (Pred_L1) is inter prediction that refers to a reference picture managed by L1, and bi-prediction (Pred_BI) is inter prediction in which both L0 prediction and L1 prediction are performed, referring to reference pictures managed by L0 and L1, respectively. Only L0 prediction can be used in inter prediction of P slices, while L0 prediction, L1 prediction and bi-prediction (Pred_BI), in which the L0 prediction and the L1 prediction are averaged or added with weighting, can be used in inter prediction of B slices. In the following processes, it is assumed that integers and variables to which the character LX is attached in the output are processed for each of L0 and L1.
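As a rough illustration of bi-prediction (Pred_BI), in which the L0 and L1 predictions are averaged or combined with weighting, a per-sample sketch (a plain average with rounding; the actual weighting factors and offsets are determined elsewhere and are not modeled here):

```python
def bi_pred_sample(pred_l0, pred_l1):
    """Combine one L0 and one L1 prediction sample by averaging with
    rounding; weighted prediction would scale each term and add an
    offset before the shift."""
    return (pred_l0 + pred_l1 + 1) >> 1
```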
MERGE MODE AND MERGE CANDIDATES [0099] A merge mode is a mode in which, instead of encoding and decoding inter prediction information such as the prediction mode, the reference index and the motion vector of an encoding/decoding target prediction block, inter prediction is performed by deriving the inter prediction information of the encoding/decoding target prediction block from the inter prediction information of a prediction block neighboring the encoding/decoding target prediction block in the same picture as the encoding/decoding target prediction block, or of a prediction block present at the same position as or near (at a neighboring position to) the encoding/decoding target prediction block in an encoded/decoded picture at a temporally different position from the encoding/decoding target prediction block. A prediction block neighboring the encoding/decoding target prediction block in the same picture as the encoding/decoding target prediction block, together with the inter prediction information of that prediction block, is called a spatial merge candidate, and a prediction block present at the same position as or near (at a neighboring position to) the encoding/decoding target prediction block in an encoded/decoded picture at a temporally different position from the encoding/decoding target prediction block, together with the inter prediction information derived from the inter prediction information of that prediction block, is called a temporal merge candidate. The respective merge candidates are added to a merge candidate list, and the merge candidate used for inter prediction is specified by a merge index. NEIGHBORING PREDICTION BLOCKS [00100] FIGS. 5, 6, 7 and 8 are diagrams for describing prediction blocks neighboring an encoding/decoding target prediction block in the same picture as the encoding/decoding target prediction block, which are referred to when deriving spatial merge candidates and the reference indices of temporal merge candidates. FIG.
9 is a diagram for describing encoded/decoded prediction blocks present at the same position as or near an encoding/decoding target prediction block in an encoded/decoded picture at a temporally different position from the encoding/decoding target prediction block, which are referred to when deriving temporal merge candidates. Prediction blocks neighboring an encoding/decoding target prediction block in the spatial direction, and prediction blocks at the same position at a different time, will be described using FIGS. 5, 6, 7, 8 and 9. [00101] As illustrated in FIG. 5, a prediction block A neighboring the left side of an encoding/decoding target prediction block in the same picture as the encoding/decoding target prediction block, a prediction block B neighboring the upper side of the encoding/decoding target prediction block, a prediction block C neighboring the top-right corner of the encoding/decoding target prediction block, a prediction block D neighboring the bottom-left corner of the encoding/decoding target prediction block, and a prediction block E neighboring the top-left corner of the encoding/decoding target prediction block are defined as the neighboring prediction blocks in the spatial direction. [00102] As illustrated in FIG. 6, when a plurality of prediction blocks neighboring the left side of the encoding/decoding target prediction block are present and are smaller in size than the encoding/decoding target prediction block, only the lowest prediction block A10 among the prediction blocks neighboring the left side is referred to as the prediction block A neighboring the left side in the present embodiment. [00103] Similarly, when a plurality of prediction blocks neighboring the upper side of the encoding/decoding target prediction block are present and are smaller in size than the encoding/decoding target prediction block, only the rightmost prediction block B10 among the prediction blocks neighboring the upper side is referred to as the prediction block B neighboring the upper side in the present embodiment.
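The positions examined for the neighboring blocks A to E can be sketched as luma-coordinate offsets from a prediction block at (xP, yP) of size nPbW x nPbH; the exact sample offsets used below follow the usual convention for this kind of derivation and are an assumption, not taken from the text:

```python
def spatial_neighbor_positions(xP, yP, nPbW, nPbH):
    """Luma coordinates of the neighboring positions for the spatial
    candidates: A (left), B (upper), C (top-right corner),
    D (bottom-left corner), E (top-left corner)."""
    return {
        'A': (xP - 1, yP + nPbH - 1),  # left side, lowest sample
        'B': (xP + nPbW - 1, yP - 1),  # upper side, rightmost sample
        'C': (xP + nPbW, yP - 1),      # above-right of the top-right corner
        'D': (xP - 1, yP + nPbH),      # below-left of the bottom-left corner
        'E': (xP - 1, yP - 1),         # above-left of the top-left corner
    }
```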
[00104] As illustrated in FIG. 7, even when the size of a prediction block F neighboring the left side of the encoding/decoding target prediction block is larger than that of the encoding/decoding target prediction block, depending on the conditions, the prediction block F serves as the prediction block A if it neighbors the left side of the encoding/decoding target prediction block, as the prediction block D if it neighbors the bottom-left corner of the encoding/decoding target prediction block, and as the prediction block E if it neighbors the top-left corner of the encoding/decoding target prediction block. In the example of FIG. 7, the prediction block A, the prediction block D and the prediction block E are the same prediction block. [00105] As illustrated in FIG. 8, even when the size of a prediction block G neighboring the upper side of the encoding/decoding target prediction block is larger than that of the encoding/decoding target prediction block, depending on the conditions, the prediction block G serves as the prediction block B if it neighbors the upper side of the encoding/decoding target prediction block, as the prediction block C if it neighbors the top-right corner of the encoding/decoding target prediction block, and as the prediction block E if it neighbors the top-left corner of the encoding/decoding target prediction block. In the example of FIG. 8, the prediction block B, the prediction block C and the prediction block E are the same prediction block. [00106] As illustrated in FIG. 9, in encoded/decoded pictures at temporally different positions from the encoding/decoding target prediction block, the encoded/decoded prediction blocks T0 and T1 present at the same position as or near the encoding/decoding target prediction block are defined as the prediction blocks at the same position at a different time.
POC [00107] A POC is a variable associated with a picture to be encoded, and a value incremented by 1 in the picture output/display order is set as the POC. Based on POC values, it is possible to determine whether two pictures are the same picture, to determine the anteroposterior relationship between pictures in the output/display order, and to derive the picture-to-picture distance. For example, when two pictures have the same POC value, it can be determined that they are the same picture. When two pictures have different POC values, it can be determined that the picture having the smaller POC value is the picture output and displayed earlier, and the difference between the POCs of the two pictures indicates the picture-to-picture distance in the temporal axis direction. [00108] Hereinafter, an embodiment of the present invention will be described with reference to the drawings. FIG. 1 is a block diagram illustrating a configuration of a moving image encoding device according to an embodiment of the present invention. The moving image encoding device of the embodiment includes an image memory 101, a header information setting unit 117, a motion vector detector 102, a motion vector difference derivation unit 103, an inter prediction information derivation unit 104, a motion-compensated prediction unit 105, an intra prediction unit 106, a prediction method determination unit 107, a residual signal construction unit 108, an orthogonal transformation and quantization unit 109, a first bit stream construction unit 118, a second bit stream construction unit 110, a third bit stream construction unit 111, a multiplexer 112, an inverse quantization and inverse orthogonal transformation unit 113, a decoded image signal superimposition unit 114, a coding information storage memory 115 and a decoded image memory 116.
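The POC comparisons described in [00107] amount to simple integer operations; a sketch with illustrative function names:

```python
def same_picture(poc_a, poc_b):
    """Two pictures with the same POC value are the same picture."""
    return poc_a == poc_b

def output_earlier(poc_a, poc_b):
    """The picture with the smaller POC is output/displayed earlier."""
    return poc_a < poc_b

def picture_distance(poc_a, poc_b):
    """The POC difference is the picture-to-picture distance along
    the temporal axis."""
    return poc_a - poc_b
```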
[00109] The header information setting unit 117 sets information in units of sequences, pictures and slices. The set information in units of sequences, pictures and slices is supplied to the inter prediction information derivation unit 104 and the first bit stream construction unit 118, and is also supplied to all the other blocks although this is not shown in the drawing. The header information setting unit 117 also sets the largest number of merge candidates maxNumMergeCand described later. [00110] The image memory 101 temporarily stores the image signals of encoding target pictures supplied in the temporal order in which the pictures are captured and displayed. The image memory 101 supplies the stored image signals of the encoding target pictures to the motion vector detector 102, the prediction method determination unit 107 and the residual signal construction unit 108 in units of predetermined pixel blocks. In this case, the image signals of the pictures stored in capture/display order are rearranged into encoding order and output from the image memory 101 in units of pixel blocks. [00111] The motion vector detector 102 detects a motion vector of each prediction block size and each prediction mode in the respective prediction blocks by performing block matching between the image signals supplied from the image memory 101 and the reference pictures supplied from the decoded image memory 116, and supplies the detected motion vectors to the motion-compensated prediction unit 105, the motion vector difference derivation unit 103 and the prediction method determination unit 107.
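A minimal sketch of the block matching performed by the motion vector detector 102, assuming an exhaustive search minimizing the sum of absolute differences (SAD); the detector's actual search strategy and cost measure are not specified in the text:

```python
def detect_motion_vector(cur_block, ref_picture, x, y, search_range):
    """Return the displacement (dx, dy) whose reference block at
    (x + dx, y + dy) minimizes the SAD against cur_block.
    cur_block and ref_picture are 2-D lists of luma samples."""
    h, w = len(cur_block), len(cur_block[0])
    def sad(dx, dy):
        return sum(abs(cur_block[i][j] - ref_picture[y + dy + i][x + dx + j])
                   for i in range(h) for j in range(w))
    # Exhaustively test every displacement in the search window.
    return min(((dx, dy)
                for dy in range(-search_range, search_range + 1)
                for dx in range(-search_range, search_range + 1)),
               key=lambda v: sad(*v))
```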
[00112] The motion vector difference derivation unit 103 derives a plurality of motion vector predictor candidates using the coding information of encoded image signals stored in the coding information storage memory 115 and adds them to a motion vector predictor list, selects the optimal motion vector predictor from the plurality of motion vector predictor candidates added to the motion vector predictor list, derives a motion vector difference from the motion vector predictor and the motion vector detected by the motion vector detector 102, and supplies the derived motion vector difference to the prediction method determination unit 107. In addition, the motion vector difference derivation unit 103 supplies a motion vector predictor index, which specifies the motion vector predictor selected from the motion vector predictor candidates added to the motion vector predictor list, to the prediction method determination unit 107. [00113] The inter prediction information derivation unit 104 derives merge candidates in the merge mode. The inter prediction information derivation unit 104 derives a plurality of merge candidates using the coding information of encoded prediction blocks stored in the coding information storage memory 115 and adds them to a merge candidate list described below, selects the optimal merge candidate from the plurality of merge candidates added to the merge candidate list, supplies the inter prediction information of the selected merge candidate, namely the flags predFlagL0[xP][yP] and predFlagL1[xP][yP] indicating whether or not to use the L0 prediction and the L1 prediction of each prediction block, the reference indices refIdxL0[xP][yP] and refIdxL1[xP][yP], and the motion vectors mvL0[xP][yP] and mvL1[xP][yP], to the motion-compensated prediction unit 105, and supplies a merge index specifying the selected merge candidate to the prediction method determination unit 107.
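A simplified sketch of the list handling performed by the inter prediction information derivation unit 104: derived spatial/temporal candidates are added without duplicates, and the list is supplemented up to maxNumMergeCand with candidates having predetermined values. The supplementation shown here (zero motion vectors with increasing L0 reference indices) is an illustrative simplification of the scheme in the claims, not the exact procedure:

```python
def build_merge_list(derived_candidates, max_num_merge_cand):
    """Construct a merge candidate list and supplement it until it
    contains max_num_merge_cand entries."""
    merge_list = []
    for cand in derived_candidates:            # spatial/temporal candidates
        if cand not in merge_list and len(merge_list) < max_num_merge_cand:
            merge_list.append(cand)            # duplicates removed
    ref_idx = 0
    while len(merge_list) < max_num_merge_cand:
        # Supplement with a candidate having predetermined values,
        # varying the reference index between supplemented candidates.
        merge_list.append({'mode': 'Pred_L0', 'refIdx': ref_idx, 'mv': (0, 0)})
        ref_idx += 1
    return merge_list
```

Because the list always reaches the previously designated number of candidates, any decoded merge index selects a valid entry.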
Here, xP and yP are coordinates indicating the position of the top left pixel of a prediction block in the image. The detailed configuration and operations of the inter prediction information derivation unit 104 will be described later.
[00114] The compensated motion prediction unit 105 constructs an image prediction signal by performing inter prediction (compensated motion prediction) from reference images using the motion vector detected by the motion vector detector 102 and the inter prediction information derived by the inter prediction information derivation unit 104, and supplies the image prediction signal to the prediction method determination unit 107. In the L0 prediction and the L1 prediction, prediction is performed in one direction. In the case of bi-prediction (Pred_BI), prediction is performed in two directions to obtain inter prediction signals in the L0 and L1 prediction modes, which are adaptively multiplied by a weighting factor and superimposed, with a correction value added, to construct a final image prediction signal.
[00115] The intra prediction unit 106 performs intra prediction in the respective intra prediction modes. The intra prediction unit 106 constructs image prediction signals by performing intra prediction from the decoded image signals stored in the decoded image memory 116, selects an optimal intra prediction mode from a plurality of intra prediction modes, and supplies an image prediction signal corresponding to the selected intra prediction mode to the prediction method determination unit 107.
[00116] The prediction method determination unit 107 evaluates coding information, a residual signal code amount, and an amount of distortion between an image prediction signal and an image signal for each prediction method in order to determine, from among a plurality of prediction methods, an optimal partition mode PartMode and prediction mode PredMode identifying inter prediction (PRED_INTER) or intra prediction (PRED_INTRA) for each coding block, determines whether the inter prediction (PRED_INTER) is the merge mode in the respective prediction blocks, determines a merge index when the inter prediction is the merge mode, determines an inter prediction mode, a motion vector predictor index, L0 and L1 reference indices, a motion vector difference, and the like when the inter prediction is not the merge mode, and supplies coding information corresponding to the determination to the second bit stream construction unit 110.
[00117] Additionally, the prediction method determination unit 107 stores coding information, including information indicating the determined prediction method and a motion vector and the like corresponding to the determined prediction method, in the coding information storage memory 115. The coding information stored here includes a prediction mode PredMode of each coding block, a partition mode PartMode, the flags predFlagL0[xP][yP] and predFlagL1[xP][yP] indicating whether or not to use the L0 prediction and the L1 prediction of each prediction block, the L0 and L1 reference indices refIdxL0[xP][yP] and refIdxL1[xP][yP], and the L0 and L1 motion vectors mvL0[xP][yP] and mvL1[xP][yP]. Here, xP and yP are coordinates indicating the position of the top left pixel of a prediction block in the image. When the prediction mode PredMode is intra prediction (MODE_INTRA), both the flag predFlagL0[xP][yP] indicating whether or not to use the L0 prediction and the flag predFlagL1[xP][yP] indicating whether or not to use the L1 prediction are 0. On the other hand, when the prediction mode PredMode is inter prediction (MODE_INTER) and the inter prediction mode is the L0 prediction (Pred_L0), the flag predFlagL0[xP][yP] indicating whether or not to use the L0 prediction is 1 and the flag predFlagL1[xP][yP] indicating whether or not to use the L1 prediction is 0. When the inter prediction mode is the L1 prediction (Pred_L1), the flag predFlagL0[xP][yP] indicating whether or not to use the L0 prediction is 0 and the flag predFlagL1[xP][yP] indicating whether or not to use the L1 prediction is 1. When the inter prediction mode is bi-prediction (Pred_BI), both the flag predFlagL0[xP][yP] indicating whether or not to use the L0 prediction and the flag predFlagL1[xP][yP] indicating whether or not to use the L1 prediction are 1. The prediction method determination unit 107 supplies the image prediction signal corresponding to the determined prediction mode to the residual signal construction unit 108 and to the decoded image signal overlay unit 114.
[00118] The residual signal construction unit 108 constructs a residual signal by performing subtraction between an image signal to be encoded and the image prediction signal, and supplies it to the orthogonal transform and quantization unit 109.
[00119] The orthogonal transform and quantization unit 109 performs orthogonal transform and quantization on the residual signal according to a quantization parameter to construct an orthogonally transformed and quantized residual signal, and supplies it to the third bit stream construction unit 111 and to the inverse quantization and inverse orthogonal transform unit 113. Additionally, the orthogonal transform and quantization unit 109 stores the quantization parameter in the coding information storage memory 115.
[00120] The first bit stream construction unit 118 encodes the information in sequence, picture, and slice units set by the header information definition unit 117 to construct a first bit stream, and supplies it to the multiplexer 112. The first bit stream construction unit 118 also encodes the maximum number of merge candidates maxNumMergeCand described later.
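The assignment of the flags predFlagL0 and predFlagL1 described above can be summarized in a small sketch; the mode names follow the document, while the function itself and its tuple return value are assumptions for illustration:

```python
# Mode name constants as used in the document.
PRED_L0, PRED_L1, PRED_BI, MODE_INTRA = "Pred_L0", "Pred_L1", "Pred_BI", "MODE_INTRA"

def prediction_flags(pred_mode):
    # Map a prediction mode to (predFlagL0, predFlagL1), following the
    # rules stated for the coding information stored per prediction block.
    if pred_mode == MODE_INTRA:
        return 0, 0  # intra prediction: neither reference list is used
    if pred_mode == PRED_L0:
        return 1, 0  # uni-prediction from list L0
    if pred_mode == PRED_L1:
        return 0, 1  # uni-prediction from list L1
    if pred_mode == PRED_BI:
        return 1, 1  # bi-prediction: both lists are used
    raise ValueError("unknown prediction mode: %r" % (pred_mode,))
```
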
[00121] The second bit stream construction unit 110 encodes the coding information corresponding to the prediction method determined by the prediction method determination unit 107 for each coding block and each prediction block. Specifically, the second bit stream construction unit 110 encodes coding information according to a predetermined syntax rule described later to construct a second bit stream, and supplies it to the multiplexer 112, the coding information including information identifying whether each coding block is in the skip mode, a prediction mode PredMode identifying inter prediction (PRED_INTER) or intra prediction (PRED_INTRA), a partition mode PartMode, an intra prediction mode when the prediction mode is intra prediction (PRED_INTRA), a flag identifying whether the inter prediction (PRED_INTER) is the merge mode, a merge index when the inter prediction is the merge mode, and an inter prediction mode, a motion vector predictor index, and motion vector difference information when the inter prediction is not the merge mode. In the present embodiment, when the coding block is in the skip mode (the syntax element skip_flag[x0][y0] is 1), the prediction mode PredMode value of a prediction block is inter prediction (MODE_INTER), the merge mode is used (merge_flag[x0][y0] is 1), and the partition mode (PartMode) is the 2Nx2N partition (PART_2Nx2N).
[00122] The third bit stream construction unit 111 performs entropy coding on the orthogonally transformed and quantized residual signal according to a predetermined syntax rule to construct a third bit stream, and supplies it to the multiplexer 112. The multiplexer 112 multiplexes the first, second, and third bit streams according to the predetermined syntax rule and outputs the multiplexed bit stream.
[00123] The inverse quantization and inverse orthogonal transform unit 113 performs inverse quantization and inverse orthogonal transform on the orthogonally transformed and quantized residual signal supplied from the orthogonal transform and quantization unit 109 to reconstruct the residual signal, and supplies it to the decoded image signal overlay unit 114. The decoded image signal overlay unit 114 superimposes the image prediction signal corresponding to the determination of the prediction method determination unit 107 and the residual signal inversely quantized and inversely orthogonally transformed by the inverse quantization and inverse orthogonal transform unit 113 to construct a decoded image, and stores it in the decoded image memory 116. A filtering process for reducing distortion such as block distortion resulting from coding may be applied to the decoded image, and the resulting image may be stored in the decoded image memory 116.
[00124] FIG. 2 is a block diagram illustrating a configuration of a moving image decoding device according to an embodiment of the present invention, corresponding to the moving image encoding device of FIG. 1. The moving image decoding device of the embodiment includes a demultiplexer 201, a first bit stream decoder 212, a second bit stream decoder 202, a third bit stream decoder 203, a motion vector derivation unit 204, an inter prediction information derivation unit 205, a compensated motion prediction unit 206, an intra prediction unit 207, an inverse quantization and inverse orthogonal transform unit 208, a decoded image signal overlay unit 209, a coding information storage memory 210, and a decoded image memory 211.
[00125] Since the decoding process of the moving image decoding device illustrated in FIG. 2 corresponds to the decoding process performed inside the moving image encoding device illustrated in FIG. 1, the respective components of the compensated motion prediction unit 206, the inverse quantization and inverse orthogonal transform unit 208, the decoded image signal overlay unit 209, the coding information storage memory 210, and the decoded image memory 211 illustrated in FIG. 2 have functions corresponding to the respective components of the compensated motion prediction unit 105, the inverse quantization and inverse orthogonal transform unit 113, the decoded image signal overlay unit 114, the coding information storage memory 115, and the decoded image memory 116 of the moving image encoding device illustrated in FIG. 1.
[00126] The bit stream supplied to the demultiplexer 201 is demultiplexed according to a predetermined syntax rule, and the demultiplexed bit streams are supplied to the first, second, and third bit stream decoders 212, 202, and 203.
[00127] The first bit stream decoder 212 decodes the supplied bit stream to obtain information in sequence, picture, and slice units. The obtained information in sequence, picture, and slice units is supplied to all blocks, although not shown in the drawing. The first bit stream decoder 212 also decodes the maximum number of merge candidates maxNumMergeCand described later.
[00128] The second bit stream decoder 202 decodes the supplied bit stream to obtain coding information in coding block units and coding information in prediction block units. Specifically, the second bit stream decoder 202 decodes coding information according to the predetermined syntax rule, stores the coding information, such as the decoded prediction mode PredMode and the decoded partition mode PartMode, in the coding information storage memory 210, and supplies it to the motion vector derivation unit 204, the inter prediction information derivation unit 205, or the intra prediction unit 207, the coding information including information identifying whether each coding block is in the skip mode, a prediction mode PredMode identifying whether the prediction mode is inter prediction (PRED_INTER) or intra prediction (PRED_INTRA), a partition mode PartMode, a flag identifying whether the inter prediction (PRED_INTER) is the merge mode, a merge index when the inter prediction is the merge mode, and an inter prediction mode, a motion vector predictor index, and a motion vector difference when the inter prediction is not the merge mode. In the present embodiment, when the coding block is in the skip mode (the syntax element skip_flag[x0][y0] is 1), the prediction mode PredMode value of a prediction block is inter prediction (MODE_INTER), the merge mode is used (merge_flag[x0][y0] is 1), and the partition mode (PartMode) is the 2Nx2N partition (PART_2Nx2N).
[00129] The third bit stream decoder 203 decodes the supplied bit stream to derive an orthogonally transformed and quantized residual signal, and supplies the orthogonally transformed and quantized residual signal to the inverse quantization and inverse orthogonal transform unit 208.
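The inference applied on the decoder side when skip_flag[x0][y0] is 1, as described above, can be sketched as follows; the helper name and dictionary keys are assumptions for illustration:

```python
def infer_skip_coding_info(skip_flag):
    # When a coding block is signalled as skip (skip_flag == 1), the
    # decoder infers the remaining coding information instead of
    # parsing it from the bit stream, as described in the text.
    if skip_flag == 1:
        return {
            "PredMode": "MODE_INTER",   # skip implies inter prediction
            "merge_flag": 1,            # skip implies the merge mode
            "PartMode": "PART_2Nx2N",   # a single prediction block
        }
    return None  # otherwise the syntax elements are parsed explicitly
```
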
[00130] When the prediction mode PredMode of a decoding target prediction block is inter prediction (PRED_INTER) and is not the merge mode, the motion vector derivation unit 204 derives a plurality of motion vector predictor candidates using the coding information of the decoded image signal stored in the coding information storage memory 210 and adds them to a motion vector predictor list described later, selects a motion vector predictor corresponding to the motion vector predictor index decoded and supplied by the second bit stream decoder 202 from the plurality of motion vector predictor candidates added to the motion vector predictor list, derives a motion vector from the selected motion vector predictor and the motion vector difference decoded by the second bit stream decoder 202, supplies it to the compensated motion prediction unit 206 together with other items of coding information, and stores it in the coding information storage memory 210. The coding information of the prediction block supplied and stored here includes the flags predFlagL0[xP][yP] and predFlagL1[xP][yP] indicating whether or not to use the L0 prediction and the L1 prediction, the L0 and L1 reference indices refIdxL0[xP][yP] and refIdxL1[xP][yP], and the L0 and L1 motion vectors mvL0[xP][yP] and mvL1[xP][yP]. Here, xP and yP are coordinates indicating the position of the top left pixel of a prediction block in the image. When the prediction mode PredMode is inter prediction (MODE_INTER) and the inter prediction mode is the L0 prediction (Pred_L0), the flag predFlagL0 indicating whether or not to use the L0 prediction is 1 and the flag predFlagL1 indicating whether or not to use the L1 prediction is 0. When the inter prediction mode is the L1 prediction (Pred_L1), the flag predFlagL0 indicating whether or not to use the L0 prediction is 0 and the flag predFlagL1 indicating whether or not to use the L1 prediction is 1. When the inter prediction mode is bi-prediction (Pred_BI), both the flag predFlagL0 indicating whether or not to use the L0 prediction and the flag predFlagL1 indicating whether or not to use the L1 prediction are 1.
[00131] The inter prediction information derivation unit 205 derives merge candidates when the prediction mode PredMode of a decoding target prediction block is inter prediction (PRED_INTER) and the merge mode. The inter prediction information derivation unit 205 derives a plurality of merge candidates using the decoded coding information of the prediction blocks stored in the coding information storage memory 210 and adds them to a merge candidate list described later, selects a merge candidate corresponding to the merge index decoded and supplied by the second bit stream decoder 202 from the plurality of merge candidates added to the merge candidate list, supplies inter prediction information including the flags predFlagL0[xP][yP] and predFlagL1[xP][yP] indicating whether or not to use the L0 prediction and the L1 prediction of the selected merge candidate, the L0 and L1 reference indices refIdxL0[xP][yP] and refIdxL1[xP][yP], and the L0 and L1 motion vectors mvL0[xP][yP] and mvL1[xP][yP] to the compensated motion prediction unit 206, and stores it in the coding information storage memory 210. Here, xP and yP are coordinates indicating the position of the top left pixel of a prediction block in the image. The detailed configuration and operations of the inter prediction information derivation unit 205 will be described later.
[00132] The compensated motion prediction unit 206 constructs an image prediction signal by performing inter prediction (compensated motion prediction) from the reference images stored in the decoded image memory 211 using the inter prediction information derived by the motion vector derivation unit 204 or the inter prediction information derivation unit 205, and supplies the image prediction signal to the decoded image signal overlay unit 209. In the case of bi-prediction (Pred_BI), compensated motion prediction is performed in the two modes of L0 prediction and L1 prediction to obtain compensated motion prediction image signals, which are adaptively multiplied by a weighting factor and superimposed to construct a final image prediction signal.
[00133] The intra prediction unit 207 performs intra prediction when the prediction mode PredMode of the decoding target prediction block is intra prediction (PRED_INTRA). The coding information decoded by the second bit stream decoder 202 includes an intra prediction mode, and the intra prediction unit 207 constructs an image prediction signal by performing intra prediction from the decoded image signal stored in the decoded image memory 211 according to the intra prediction mode, and supplies the image prediction signal to the decoded image signal overlay unit 209. Both the flags predFlagL0[xP][yP] and predFlagL1[xP][yP] indicating whether or not to use the L0 prediction and the L1 prediction are set to 0 and stored in the coding information storage memory 210. Here, xP and yP are coordinates indicating the position of the top left pixel of a prediction block in the image.
[00134] The inverse quantization and inverse orthogonal transform unit 208 performs inverse quantization and inverse orthogonal transform on the orthogonally transformed and quantized residual signal decoded by the second bit stream decoder 202 to obtain an inversely quantized and inversely orthogonally transformed residual signal.
[00135] The decoded image signal overlay unit 209 superimposes the image prediction signal inter-predicted by the compensated motion prediction unit 206 or intra-predicted by the intra prediction unit 207 on the residual signal inversely quantized and inversely orthogonally transformed by the inverse quantization and inverse orthogonal transform unit 208 to reconstruct a decoded image signal, and stores it in the decoded image memory 211. When the decoded image signal is stored in the decoded image memory 211, a filtering process for reducing block distortion or the like resulting from coding may be performed on the decoded image, which is then stored in the decoded image memory 211.
SYNTAX
[00136] Next, a syntax will be described, which is a common rule for encoding and decoding a moving image bit stream that is encoded by a moving image encoding device employing a motion vector prediction method according to the present embodiment and decoded by a decoding device.
[00137] In the present embodiment, the header information definition unit 117 sets the maximum number of merge candidates maxNumMergeCand added to the merge candidate list mergeCandList in sequence, picture, or slice units, and the corresponding syntax element is encoded by the first bit stream construction unit 118 of the moving image encoding device and decoded by the first bit stream decoder 212 of the moving image decoding device. A value from 0 to 5 can be set as the maximum number of merge candidates maxNumMergeCand; in particular, a small value is set as the maximum number of merge candidates maxNumMergeCand when the processing amount of the moving image encoding device is to be reduced. When 0 is set as the maximum number of merge candidates maxNumMergeCand, predetermined inter prediction information is used as the merge candidate. In the description of the present embodiment, the maximum number of merge candidates maxNumMergeCand is set to 5.
[00138] FIG.
10 illustrates a syntax rule described in prediction block units. In the present embodiment, when the coding block is in the skip mode (the syntax element skip_flag[x0][y0] is 1), the prediction mode PredMode value of the prediction block is inter prediction (MODE_INTER), the merge mode is used (merge_flag[x0][y0] is 1), and the partition mode (PartMode) is the 2Nx2N partition (PART_2Nx2N). When the flag merge_flag[x0][y0] is 1, it indicates that the prediction mode is the merge mode; when the value of the maximum number of merge candidates maxNumMergeCand is greater than 1, a syntax element merge_idx[x0][y0] of a merge index identifying an entry of the merge candidate list to be referred to is provided. When the flag skip_flag[x0][y0] is 1, it indicates that the coding block is in the skip mode; when the value of the maximum number of merge candidates maxNumMergeCand is greater than 1, a syntax element merge_idx[x0][y0] of a merge index identifying an entry of the merge candidate list to be referred to is provided.
[00139] When the prediction mode PredMode value of a prediction block is inter prediction (MODE_INTER), a flag merge_flag[x0][y0] indicating whether the prediction block is in the merge mode is provided. Here, x0 and y0 are coordinates indicating the position of the top left pixel of a prediction block in an image of the luma signal, and the flag merge_flag[x0][y0] indicates whether the prediction block positioned at (x0, y0) in the image is in the merge mode.
[00140] Subsequently, when the flag merge_flag[x0][y0] is 1, it indicates that the prediction block is in the merge mode; when the value of the maximum number of merge candidates maxNumMergeCand is greater than 1, a syntax element merge_idx[x0][y0] of a merge index identifying an entry of the merge candidate list to be referred to is provided.
Here, x0 and y0 are coordinates indicating the position of the top left pixel of a prediction block in the image, and the index merge_idx[x0][y0] is the merge index of the prediction block positioned at (x0, y0) in the image. When a merge index is entropy encoded and decoded, the smaller the number of merge candidates, the smaller the code amount and the smaller the processing amount with which the encoding/decoding can be performed. FIG. 11 illustrates an example of entropy symbols (codes) of the syntax element merge_idx[x0][y0] of the merge index. When the maximum number of merge candidates is 2 and the merge index values are 0 and 1, the symbols of the syntax element merge_idx[x0][y0] of the merge index are '0' and '1', respectively. When the maximum number of merge candidates is 3 and the merge index values are 0, 1, and 2, the symbols of the syntax element merge_idx[x0][y0] of the merge index are '0', '10', and '11', respectively. When the maximum number of merge candidates is 4 and the merge index values are 0, 1, 2, and 3, the symbols of merge_idx[x0][y0] of the merge index are '0', '10', '110', and '111', respectively. When the maximum number of merge candidates is 5 and the merge index values are 0, 1, 2, 3, and 4, the symbols of merge_idx[x0][y0] of the merge index are '0', '10', '110', '1110', and '1111', respectively. That is, when the maximum number of merge candidates maxNumMergeCand added to the merge candidate list mergeCandList is known, a merge index smaller than the maximum number of merge candidates maxNumMergeCand can be represented with a smaller code amount. In the present embodiment, as illustrated in FIG. 11, the code amount of the merge index is reduced by switching the symbols indicating the merge index values according to the number of merge candidates.
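The variable-length symbols described for FIG. 11 correspond to a truncated unary binarization; a minimal sketch, assuming the symbol table above is complete (the function name is hypothetical):

```python
def merge_index_code(merge_idx, max_num_merge_cand):
    # Truncated unary binarization of a merge index, matching the
    # symbol table described for FIG. 11: '0', '10', '110', ...,
    # where the last possible index drops its terminating '0'
    # because no larger index can follow.
    assert 0 <= merge_idx < max_num_merge_cand
    if merge_idx == max_num_merge_cand - 1:
        return "1" * merge_idx            # last index: no terminating '0'
    return "1" * merge_idx + "0"
```

This illustrates why knowing maxNumMergeCand reduces the code amount: the codeword set shrinks as the number of candidates decreases.
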
In the present embodiment, a merge index having a value greater than or equal to the value of the maximum number of merge candidates maxNumMergeCand is neither encoded nor decoded. When the maximum number of merge candidates maxNumMergeCand is 1, the merge index is not encoded/decoded and the merge index is 0. Moreover, when the maximum number of merge candidates is 0, the merge index is not necessary since the predetermined inter prediction information is used as the merge candidate.
[00141] On the other hand, when the flag merge_flag[x0][y0] is 0, it indicates that the prediction mode is not the merge mode. When the slice type is a B slice, a syntax element inter_pred_flag[x0][y0] identifying an inter prediction mode is provided, and the L0 prediction (Pred_L0), the L1 prediction (Pred_L1), and bi-prediction (Pred_BI) are identified by this syntax element. Syntax elements ref_idx_l0[x0][y0] and ref_idx_l1[x0][y0] of a reference index identifying a reference image, and syntax elements mvd_l0[x0][y0][j] and mvd_l1[x0][y0][j] of a motion vector difference, which is the difference between the motion vector predictor and the motion vector of the prediction block obtained by motion vector detection, are provided for the respective lists L0 and L1. Here, x0 and y0 are coordinates indicating the position of the top left pixel of a prediction block in the image, ref_idx_l0[x0][y0] and mvd_l0[x0][y0][j] are the L0 reference index and motion vector difference of the prediction block positioned at (x0, y0) in the image, respectively, and ref_idx_l1[x0][y0] and mvd_l1[x0][y0][j] are the L1 reference index and motion vector difference of the prediction block positioned at (x0, y0) in the image, respectively. In addition, j indicates a component of the motion vector difference: j = 0 indicates the x component and j = 1 indicates the y component.
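The relation between the motion vector, the motion vector predictor, and the motion vector difference mvd described above can be sketched per component; the helper names are hypothetical:

```python
def motion_vector_difference(mv, mvp):
    # mvd_lX[x0][y0][j]: per-component difference between the detected
    # motion vector and the selected motion vector predictor
    # (j = 0 is the x component, j = 1 is the y component).
    return (mv[0] - mvp[0], mv[1] - mvp[1])

def reconstruct_motion_vector(mvd, mvp):
    # Decoder side: the motion vector is recovered by adding the
    # decoded difference back to the chosen predictor.
    return (mvd[0] + mvp[0], mvd[1] + mvp[1])
```
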
Subsequently, syntax elements mvp_idx_l0[x0][y0] and mvp_idx_l1[x0][y0] of an index of the motion vector predictor list, which is the list of motion vector predictor candidates to be referred to, are provided. Here, x0 and y0 are coordinates indicating the position of the top left pixel of a prediction block in the image, and mvp_idx_l0[x0][y0] and mvp_idx_l1[x0][y0] are the L0 and L1 motion vector predictor indices of the prediction block positioned at (x0, y0) in the image. In the present embodiment of the present invention, the value of the number of these candidates is set to 2.
[00142] The inter prediction information derivation method according to the embodiment is performed by the inter prediction information derivation unit 104 of the moving image encoding device illustrated in FIG. 1 and by the inter prediction information derivation unit 205 of the moving image decoding device illustrated in FIG. 2.
[00143] The inter prediction information derivation method according to the embodiment will be described with reference to the drawings. The method is performed in units of the prediction blocks constituting a coding block, in both the encoding and the decoding process. When the prediction mode PredMode of a prediction block is inter prediction (MODE_INTER) and the merge mode, which includes the skip mode, is used, the method is performed, in the case of encoding, when deriving the prediction mode, reference index, and motion vector of an encoding target prediction block using the prediction mode, reference index, and motion vector of an encoded prediction block, and, in the case of decoding, when deriving the prediction mode, reference index, and motion vector of a decoding target prediction block using the prediction mode, reference index, and motion vector of a decoded prediction block.
[00144] In the merge mode, merge candidates are derived from prediction blocks including the Col prediction block (T0 or T1), which is present at the same position as, or a position near, the encoding target prediction block in a temporally different image, described with reference to FIG. 9, in addition to the five prediction blocks described with reference to FIGS. 5, 6, 7, and 8: the prediction block A neighboring the left side, the prediction block B neighboring the top side, the prediction block C neighboring the top right corner, the prediction block D neighboring the bottom left corner, and the prediction block E neighboring the top left corner. The inter prediction information derivation unit 104 of the moving image encoding device and the inter prediction information derivation unit 205 of the moving image decoding device add these merge candidates to the merge candidate list in the same predetermined procedure on the encoder and decoder sides. The inter prediction information derivation unit 104 of the moving image encoding device determines the merge index identifying an element of the merge candidate list and performs encoding with the aid of the second bit stream construction unit 110. The inter prediction information derivation unit 205 of the moving image decoding device receives the merge index decoded by the second bit stream decoder 202, selects the prediction block corresponding to the merge index from the merge candidate list, and performs compensated motion prediction using the inter prediction information such as the prediction mode, reference index, and motion vector of the selected merge candidate.
[00145] An inter prediction information derivation method according to a first practical example of the embodiment will be described with reference to the drawings. FIG. 12 is a diagram illustrating a detailed configuration of the inter prediction information derivation unit 104 of the moving image encoding device illustrated in FIG. 1 according to the first practical example of the embodiment. FIG.
13 is a diagram illustrating a detailed configuration of the inter prediction information derivation unit 205 of the moving image decoding device illustrated in FIG. 2 according to the first practical example of the embodiment. [00146] The parts surrounded by a solid bold frame in FIGS. 12 and 13 indicate the inter prediction information derivation unit 104 and the inter prediction information derivation unit 205, respectively. [00147] Additionally, the parts surrounded by a bold dotted line inside those frames indicate a merge candidate list construction unit 120 of the moving image encoding device and a merge candidate list construction unit 220 of the moving image decoding device, each of which derives merge candidates to construct a merge candidate list. The moving image decoding device includes a unit corresponding to that of the moving image encoding device of the embodiment, so that the same, consistent determination result is obtained in encoding and decoding. [00148] In the inter prediction information derivation method according to the embodiment, in the merge candidate derivation process and the merge candidate list construction process of the merge candidate list construction unit 120 of the moving image encoding device and the merge candidate list construction unit 220 of the moving image decoding device, the merge candidates of a processing-target prediction block are derived, and a merge candidate list is constructed, without referring to any prediction block included in the same coding block as the processing-target prediction block. In this way, when the partition mode (PartMode) of a coding block is not 2Nx2N partition (PART_2Nx2N) (that is, when a plurality of prediction blocks are present in a coding block), the encoder can perform the merge candidate derivation process and the merge candidate list construction process in parallel for the prediction blocks in a coding block. [00149] The parallel construction of the merge candidate list for each prediction block in a coding block will be described for each partition mode (PartMode) with reference to FIGS. 14A to 14H, which are diagrams describing the prediction blocks neighboring a processing-target prediction block for each partition mode (PartMode) of a processing-target coding block. In FIGS. 14A to 14H, A0, B0, C0, D0, and E0 indicate a prediction block A neighboring the left side, a prediction block B neighboring the top side, a prediction block C neighboring the top-right corner, a prediction block D neighboring the bottom-left corner, and a prediction block E neighboring the top-left corner of the processing-target prediction block whose partition index PartIdx is 0, respectively. Likewise, A1, B1, C1, D1, and E1; A2, B2, C2, D2, and E2; and A3, B3, C3, D3, and E3 indicate the corresponding neighboring prediction blocks A to E of the processing-target prediction blocks whose partition indices PartIdx are 1, 2, and 3, respectively. [00150] FIGS. 14B, 14C, and 14D illustrate the neighboring prediction blocks when the partition mode (PartMode) that divides a processing-target coding block into two prediction blocks arranged in the vertical direction is 2NxN partition (PART_2NxN), 2NxnU partition (PART_2NxnU), or 2NxnD partition (PART_2NxnD). The prediction block B1 neighboring the processing-target prediction block having PartIdx 1 is the prediction block having PartIdx 0. Accordingly, if the merge candidate derivation process and the merge candidate list construction process for the prediction block having PartIdx 1 refer to the prediction block B1, these processes cannot be performed until the same processes for the prediction block having PartIdx 0, which belongs to the same coding block and is the prediction block B1, are completed and the merge candidates to be used are specified. Thus, in the inter prediction information derivation method according to the embodiment, when the partition mode (PartMode) is 2NxN partition (PART_2NxN), 2NxnU partition (PART_2NxnU), or 2NxnD partition (PART_2NxnD) and the PartIdx of the processing-target prediction block is 1, the merge candidate derivation process and the merge candidate list construction process for the prediction block having PartIdx 1 are performed without referring to the prediction block B1, which is the prediction block having PartIdx 0; it is therefore possible to perform these processes for the two prediction blocks in the coding block in parallel. [00151] FIGS. 14E, 14F, and 14G illustrate the neighboring prediction blocks when the partition mode (PartMode) that divides a processing-target coding block into two prediction blocks arranged in the horizontal direction is Nx2N partition (PART_Nx2N), nLx2N partition (PART_nLx2N), or nRx2N partition (PART_nRx2N). The prediction block A1 neighboring the processing-target prediction block having PartIdx 1 is the prediction block having PartIdx 0. Accordingly, if the merge candidate derivation process and the merge candidate list construction process for the prediction block having PartIdx 1 refer to the prediction block A1, these processes cannot be performed until the same processes for the prediction block having PartIdx 0, which belongs to the same coding block and is the prediction block A1, are completed and the merge candidates to be used are specified. Thus, in the inter prediction information derivation method according to the embodiment, when the partition mode (PartMode) is Nx2N partition (PART_Nx2N), nLx2N partition (PART_nLx2N), or nRx2N partition (PART_nRx2N) and the PartIdx of the processing-target prediction block is 1, the merge candidate derivation process and the merge candidate list construction process for the prediction block having PartIdx 1 are performed without referring to the prediction block A1, which is the prediction block having PartIdx 0; it is therefore possible to perform these processes for the two prediction blocks in the coding block in parallel. [00152] FIG. 14H illustrates the neighboring prediction blocks when the partition mode (PartMode) that divides a processing-target coding block into four prediction blocks in both the vertical and horizontal directions is NxN partition (PART_NxN).
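The reference restrictions described for FIGS. 14A to 14H can be summarized programmatically. The sketch below is illustrative (the function and set names are not from the patent text); given a partition mode and a partition index, it returns the neighboring blocks that must not be referred to because they would belong to the same coding block, which is what enables parallel list construction:

```python
# Hypothetical helper names; the exclusion rules themselves follow the text.
VERTICAL_SPLITS = {"PART_2NxN", "PART_2NxnU", "PART_2NxnD"}    # two blocks stacked vertically
HORIZONTAL_SPLITS = {"PART_Nx2N", "PART_nLx2N", "PART_nRx2N"}  # two blocks side by side

def excluded_neighbors(part_mode, part_idx):
    """Return the neighbor labels (A, B, C, D, E) that must be skipped when
    deriving merge candidates for the prediction block with index part_idx."""
    if part_mode in VERTICAL_SPLITS and part_idx == 1:
        return {"B"}   # block B above is PartIdx 0 of the same coding block
    if part_mode in HORIZONTAL_SPLITS and part_idx == 1:
        return {"A"}   # block A to the left is PartIdx 0 of the same coding block
    if part_mode == "PART_NxN":
        # PartIdx 1: A is PartIdx 0; PartIdx 2: B is PartIdx 0 and C is PartIdx 1;
        # PartIdx 3: E, B, A are PartIdx 0, 1, 2 respectively.
        return {1: {"A"}, 2: {"B", "C"}, 3: {"E", "B", "A"}}.get(part_idx, set())
    return set()       # PART_2Nx2N, or PartIdx 0: no neighbor is in the same block
```

With no excluded neighbor depending on another block of the same coding block, each prediction block's candidate list can be built independently.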
The prediction block A1 neighboring the processing-target prediction block having PartIdx 1 is the prediction block having PartIdx 0. Accordingly, if the merge candidate derivation process and the merge candidate list construction process for the prediction block having PartIdx 1 refer to the prediction block A1, these processes cannot be performed until the same processes for the prediction block having PartIdx 0, which belongs to the same coding block and is the prediction block A1, are completed and the merge candidates to be used are specified. Thus, in the inter prediction information derivation method according to the embodiment, when the partition mode (PartMode) is NxN partition (PART_NxN) and the PartIdx of the processing-target prediction block is 1, the merge candidate derivation process and the merge candidate list construction process for the prediction block having PartIdx 1 are performed without referring to the prediction block A1, which is the prediction block having PartIdx 0; it is therefore possible to perform these processes for the respective prediction blocks in the coding block in parallel. The prediction block B2 neighboring the processing-target prediction block having PartIdx 2 is the prediction block having PartIdx 0, and the prediction block C2 is the prediction block having PartIdx 1. Accordingly, if the merge candidate derivation process and the merge candidate list construction process for the prediction block having PartIdx 2 refer to the prediction blocks B2 and C2, these processes cannot be performed until the same processes for the prediction blocks having PartIdx 0 and 1, which belong to the same coding block and are the prediction blocks B2 and C2, are completed and the merge candidates to be used are specified. Thus, in the inter prediction information derivation method according to the embodiment, when the partition mode (PartMode) is NxN partition (PART_NxN) and the PartIdx of the processing-target prediction block is 2, the merge candidate derivation process and the merge candidate list construction process for the prediction block having PartIdx 2 are performed without referring to the prediction blocks B2 and C2, which are the prediction blocks having PartIdx 0 and 1; it is therefore possible to perform these processes for the respective prediction blocks in the coding block in parallel. The prediction block E3 neighboring the processing-target prediction block having PartIdx 3 is the prediction block having PartIdx 0, the prediction block B3 is the prediction block having PartIdx 1, and the prediction block A3 is the prediction block having PartIdx 2. Accordingly, if the merge candidate derivation process and the merge candidate list construction process for the prediction block having PartIdx 3 refer to the prediction blocks E3, B3, and A3, these processes cannot be performed until the same processes for the prediction blocks having PartIdx 0, 1, and 2, which belong to the same coding block and are the prediction blocks E3, B3, and A3, are completed and the merge candidates to be used are specified. Thus, in the inter prediction information derivation method according to the embodiment, when the partition mode (PartMode) is NxN partition (PART_NxN) and the PartIdx of the processing-target prediction block is 3, the merge candidate derivation process and the merge candidate list construction process for the prediction block having PartIdx 3 are performed without referring to the prediction blocks E3, B3, and A3, which are the prediction blocks having PartIdx 0, 1, and 2; it is therefore possible to perform these processes for the respective prediction blocks in the coding block in parallel. [00153] The inter prediction information derivation unit 104 of the moving image encoding device illustrated in FIG. 12 includes a merge candidate list construction unit 130, a spatial merge candidate construction unit 131, a temporal merge candidate reference index derivation unit 132, a temporal merge candidate derivation unit 133, an additional merge candidate derivation unit 134, a merge candidate limiting unit 136, and an inter prediction information selection unit 137. [00154] The inter prediction information derivation unit 205 of the moving image decoding device illustrated in FIG. 13 includes a merge candidate list construction unit 230, a spatial merge candidate construction unit 231, a temporal merge candidate reference index derivation unit 232, a temporal merge candidate derivation unit 233, an additional merge candidate derivation unit 234, a merge candidate limiting unit 236, and an inter prediction information selection unit 237. [00155] FIG. 15 is a flow chart describing the flow of the merge candidate derivation process and the merge candidate list construction process, which are functions common to the merge candidate list construction unit 120 of the inter prediction information derivation unit 104 of the moving image encoding device and the merge candidate list construction unit 220 of the inter prediction information derivation unit 205 of the moving image decoding device, according to the first practical example of the embodiment of the present invention. [00156] Hereinafter, the respective processes will be described in sequence. In the following description, the case where the slice type slice_type is a B slice is described unless otherwise stated, but the same can be applied to P slices. However, when the slice type slice_type is a P slice, since the inter prediction mode includes only L0 prediction (Pred_L0) and does not include L1 prediction (Pred_L1) or bi-prediction (Pred_BI), the processes associated with L1 can be omitted. In the present embodiment, in the moving image encoding device and the moving image decoding device, when the value of the largest number of merge candidates maxNumMergeCand is 0, the merge candidate derivation process and the merge candidate list construction process of FIG. 15 can be omitted. [00157] First, the merge candidate list construction unit 130 of the inter prediction information derivation unit 104 of the moving image encoding device and the merge candidate list construction unit 230 of the inter prediction information derivation unit 205 of the moving image decoding device create a merge candidate list mergeCandList (step S100 of FIG. 15). The merge candidate list mergeCandList has a list structure and includes a merge index that indicates the location within the merge candidate list and a storage area that stores, as an element, the merge candidate corresponding to each index.
The merge index numbers start at 0, and merge candidates are stored in the storage area of the merge candidate list mergeCandList. In the following processes, a prediction block serving as the merge candidate corresponding to a merge index i added to the merge candidate list mergeCandList is expressed as mergeCandList[i], in order to distinguish it by array notation from the merge candidate list mergeCandList itself. In the present embodiment, it is assumed that the merge candidate list mergeCandList can register at least five merge candidates (items of inter prediction information). Additionally, 0 is set to the variable numMergeCand, which indicates the number of merge candidates added to the merge candidate list mergeCandList. The created merge candidate list mergeCandList is supplied to the spatial merge candidate construction unit 131 of the inter prediction information derivation unit 104 of the moving image encoding device and to the spatial merge candidate construction unit 231 of the inter prediction information derivation unit 205 of the moving image decoding device. [00158] The spatial merge candidate construction unit 131 of the inter prediction information derivation unit 104 of the moving image encoding device and the spatial merge candidate construction unit 231 of the inter prediction information derivation unit 205 of the moving image decoding device derive spatial merge candidates A, B, C, D, and E from the respective prediction blocks A, B, C, D, and E neighboring the encoding/decoding target block, using the coding information stored in the coding information storage memory 115 of the moving image encoding device or in the coding information storage memory 210 of the moving image decoding device, and add the derived spatial merge candidates to the merge candidate list mergeCandList (step S101 of FIG. 15). Here, N, indicating A, B, C, D, E, or the temporal merge candidate Col, is defined.
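As a rough model, the merge candidate list created in step S100 and the inter prediction information held per candidate can be sketched as follows. The field names are illustrative renderings of availableFlagN, predFlagLXN, refIdxLXN, and mvLXN; this is not the normative data structure:

```python
from dataclasses import dataclass

@dataclass
class MergeCand:
    available: int = 0      # availableFlagN
    pred_flag_l0: int = 0   # predFlagL0N: whether L0 prediction is performed
    pred_flag_l1: int = 0   # predFlagL1N: whether L1 prediction is performed
    ref_idx_l0: int = -1    # refIdxL0N
    ref_idx_l1: int = -1    # refIdxL1N
    mv_l0: tuple = (0, 0)   # mvL0N
    mv_l1: tuple = (0, 0)   # mvL1N

# Step S100: an empty list able to register at least five candidates.
# The position in the list is the merge index, and numMergeCand starts at 0.
merge_cand_list = []   # mergeCandList
num_merge_cand = 0     # numMergeCand
```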
An availability flag availableFlagN indicating whether the inter prediction information of a prediction block N can be used as the spatial merge candidate N, an L0 reference index refIdxL0N and an L1 reference index refIdxL1N of the spatial merge candidate N, an L0 prediction flag predFlagL0N indicating whether L0 prediction is performed, an L1 prediction flag predFlagL1N indicating whether L1 prediction is performed, an L0 motion vector mvL0N, and an L1 motion vector mvL1N are derived. However, in the present embodiment, since merge candidates are derived without referring to any prediction block included in the same coding block as the coding block that includes the processing-target prediction block, spatial merge candidates included in that same coding block are not derived. The detailed process flow of step S101 will be described later with reference to the flowchart of FIG. 16. The merge candidate list mergeCandList is supplied to the temporal merge candidate derivation unit 133 of the inter prediction information derivation unit 104 of the moving image encoding device and to the temporal merge candidate derivation unit 233 of the inter prediction information derivation unit 205 of the moving image decoding device.
[00159] Subsequently, the temporal merge candidate reference index derivation unit 132 of the inter prediction information derivation unit 104 of the moving image encoding device and the temporal merge candidate reference index derivation unit 232 of the inter prediction information derivation unit 205 of the moving image decoding device derive the reference indices of the temporal merge candidate from the prediction blocks neighboring the encoding/decoding target block and supply the derived reference indices to the temporal merge candidate derivation unit 133 of the inter prediction information derivation unit 104 of the moving image encoding device and to the temporal merge candidate derivation unit 233 of the inter prediction information derivation unit 205 of the moving image decoding device (step S102 of FIG. 15). However, in the present embodiment, the reference indices of the temporal merge candidate are derived without referring to any prediction block included in the same coding block as the coding block that includes the processing-target prediction block. When the slice type slice_type is a P slice and inter prediction is performed using the inter prediction information of the temporal merge candidate, only the L0 reference index is derived, since only L0 prediction (Pred_L0) is performed. When the slice type slice_type is a B slice and inter prediction is performed using the inter prediction information of the temporal merge candidate, the L0 and L1 reference indices are derived, since bi-prediction (Pred_BI) is performed. The detailed process flow of step S102 will be described in detail later with reference to the flowchart of FIG. 18.
[00160] Subsequently, the temporal merge candidate derivation unit 133 of the inter prediction information derivation unit 104 of the moving image encoding device and the temporal merge candidate derivation unit 233 of the inter prediction information derivation unit 205 of the moving image decoding device derive temporal merge candidates from pictures of a different time and add the derived temporal merge candidates to the merge candidate list mergeCandList (step S103 of FIG. 15). An availability flag availableFlagCol indicating whether the temporal merge candidate can be used, an L0 prediction flag predFlagL0Col indicating whether L0 prediction is performed, an L1 prediction flag predFlagL1Col indicating whether L1 prediction is performed, an L0 motion vector mvL0Col, and an L1 motion vector mvL1Col are derived. The detailed process flow of step S103 will be described in detail later with reference to the flowchart of FIG. 19. The merge candidate list mergeCandList is supplied to the additional merge candidate derivation unit 134 of the inter prediction information derivation unit 104 of the moving image encoding device and to the additional merge candidate derivation unit 234 of the inter prediction information derivation unit 205 of the moving image decoding device. [00161] Subsequently, when the number of merge candidates numMergeCand added to the merge candidate list mergeCandList is less than the largest number of merge candidates maxNumMergeCand, the additional merge candidate derivation unit 134 of the inter prediction information derivation unit 104 of the moving image encoding device and the additional merge candidate derivation unit 234 of the inter prediction information derivation unit 205 of the moving image decoding device derive additional merge candidates, using the largest number of merge candidates maxNumMergeCand as an upper limit, and add the derived additional merge candidates to the merge candidate list mergeCandList (step S104 of FIG. 15). Using the largest number of merge candidates maxNumMergeCand as an upper limit, for P slices, merge candidates that have different reference indices, whose motion vectors have the value (0, 0), and whose prediction mode is L0 prediction (Pred_L0) are added. For B slices, merge candidates that have different reference indices, whose motion vectors have the value (0, 0), and whose prediction mode is bi-prediction (Pred_BI) are added. The detailed process flow of step S104 will be described in detail later with reference to the flowchart of FIG. 26. For B slices, merge candidates that have already been added, whose combinations of L0 prediction and L1 prediction are changed, and whose prediction mode is bi-prediction (Pred_BI), may also be derived and added. The merge candidate list mergeCandList is supplied to the merge candidate limiting unit 136 of the inter prediction information derivation unit 104 of the moving image encoding device and to the merge candidate limiting unit 236 of the inter prediction information derivation unit 205 of the moving image decoding device. [00162] Subsequently, when the value of the number of merge candidates numMergeCand added to the merge candidate list mergeCandList is greater than the largest number of merge candidates maxNumMergeCand, the merge candidate limiting unit 136 of the inter prediction information derivation unit 104 of the moving image encoding device and the merge candidate limiting unit 236 of the inter prediction information derivation unit 205 of the moving image decoding device limit the value of the number of merge candidates numMergeCand added to the merge candidate list mergeCandList to the largest number of merge candidates maxNumMergeCand (step S106 of FIG. 15).
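The padding rule of step S104 in paragraph [00161] above can be sketched as follows. This is an illustrative model, not the normative procedure: it assumes `num_ref_idx` valid reference pictures and pads the list with zero-motion-vector candidates whose reference indices differ, using Pred_L0 for P slices and Pred_BI for B slices:

```python
def derive_additional_merge_candidates(merge_cand_list, max_num_merge_cand,
                                       num_ref_idx, slice_type):
    """Pad merge_cand_list with zero-motion candidates until it holds
    maxNumMergeCand entries (illustrative sketch of step S104)."""
    ref = 0
    while len(merge_cand_list) < max_num_merge_cand:
        cand = {
            "mv_l0": (0, 0), "mv_l1": (0, 0),          # motion vectors are (0, 0)
            "ref_idx_l0": ref,                          # differing reference indices
            "ref_idx_l1": ref if slice_type == "B" else -1,
            "mode": "Pred_BI" if slice_type == "B" else "Pred_L0",
        }
        merge_cand_list.append(cand)
        ref = (ref + 1) % num_ref_idx                   # cycle through valid indices
    return merge_cand_list
```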
The merge candidate list mergeCandList is supplied to the inter prediction information selection unit 137 of the inter prediction information derivation unit 104 of the moving image encoding device and to the inter prediction information selection unit 237 of the inter prediction information derivation unit 205 of the moving image decoding device. The detailed process flow of step S106 will be described with reference to the flowchart of FIG. 27. [00163] When the value of the number of merge candidates numMergeCand added to the merge candidate list mergeCandList is greater than the largest number of merge candidates maxNumMergeCand (step S7101 of FIG. 27: YES), the value of the number of merge candidates numMergeCand is updated to the largest number of merge candidates maxNumMergeCand (step S7102 of FIG. 27). The process of step S7102 amounts to inhibiting access to all merge candidates whose merge index in the merge candidate list mergeCandList is greater than (maxNumMergeCand - 1), thereby limiting the number of merge candidates added to the merge candidate list mergeCandList to the largest number of merge candidates maxNumMergeCand. [00164] In the present embodiment, the number of merge candidates added to the merge candidate list mergeCandList is set to a fixed number for each slice. The reason why this number is fixed is as follows. If the number of merge candidates added to the merge candidate list mergeCandList changed depending on the state of the constructed merge candidate list, entropy decoding would depend on the constructed merge candidate list. Thus, the decoder could not decode merge indices by entropy decoding unless a merge candidate list were constructed for each prediction block and the number of merge candidates added to the merge candidate list mergeCandList were derived.
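Steps S7101 and S7102 of FIG. 27 described above amount to a simple clamp, sketched here illustratively:

```python
def limit_merge_candidates(merge_cand_list, num_merge_cand, max_num_merge_cand):
    """Step S7101: test whether numMergeCand exceeds maxNumMergeCand; if so,
    step S7102 updates numMergeCand to maxNumMergeCand, so that merge indices
    greater than (maxNumMergeCand - 1) are never accessed (illustrative)."""
    if num_merge_cand > max_num_merge_cand:
        num_merge_cand = max_num_merge_cand
        merge_cand_list = merge_cand_list[:max_num_merge_cand]
    return merge_cand_list, num_merge_cand
```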
As a result, decoding of merge indices would be delayed and entropy decoding would become complex. Additionally, if entropy decoding depended on the state of a constructed merge candidate list that includes merge candidates Col derived from prediction blocks of pictures of a different time, then, when an error occurred during the decoding of the bit stream of a different picture, the bit stream of the current picture would also be influenced by the error; it would therefore be impossible to derive the number of merge candidates added to a normal merge candidate list mergeCandList and to continue entropy decoding accordingly. As in the present embodiment, when the number of merge candidates added to the merge candidate list mergeCandList is set to a fixed value for each slice, it is not necessary to derive the number of merge candidates added to the merge candidate list mergeCandList for each prediction block, and the merge indices can be decoded by entropy decoding independently of the construction of the merge candidate list. Moreover, even if an error occurs during the decoding of the bit stream of another picture, the entropy decoding of the bit stream of the current picture can continue without being influenced by the error. In this embodiment, a syntax element indicating the number of merge candidates added to the merge candidate list mergeCandList is encoded for each slice, and this number is defined as the largest number of merge candidates maxNumMergeCand. [00165] Subsequently, the method of deriving the merge candidates N from the prediction blocks N neighboring the encoding/decoding target block, which is the process of step S101 of FIG. 15, will be described in detail. FIG. 16 is a flow chart describing the flow of the spatial merge candidate derivation process of step S101 of FIG. 15.
N is a variable taking the value A (left), B (top), C (top right), D (bottom left), or E (top left), indicating the region of a neighboring prediction block. In the present embodiment, a maximum of four spatial merge candidates are derived from the five neighboring prediction blocks. [00166] In FIG. 16, the coding information of the prediction block A neighboring the left side of the encoding/decoding target prediction block is examined with the variable N set to A to derive a merge candidate A; the coding information of the prediction block B neighboring the top side is examined with N set to B to derive a merge candidate B; the coding information of the prediction block C neighboring the top-right corner is examined with N set to C to derive a merge candidate C; the coding information of the prediction block D neighboring the bottom-left corner is examined with N set to D to derive a merge candidate D; and the coding information of the prediction block E neighboring the top-left corner is examined with N set to E to derive a merge candidate E. The derived merge candidates are added to the merge candidate list (steps S1101 to S1118 of FIG. 16). [00167] First, when the variable N is E and the sum of the flags availableFlagA, availableFlagB, availableFlagC, and availableFlagD is 4 (step S1102 of FIG. 16: YES) (that is, four spatial merge candidates have been derived), the flag availableFlagE of the merge candidate E is set to 0 (step S1107 of FIG. 16), both motion vectors mvL0E and mvL1E of the merge candidate E are set to (0, 0) (step S1108 of FIG. 16), and both flags predFlagL0E and predFlagL1E of the merge candidate E are set to 0 (step S1109 of FIG. 16). After that, the flow proceeds to step S1118 and the spatial merge candidate derivation process ends. [00168] In the present embodiment, since a maximum of four merge candidates are derived from the neighboring prediction blocks, when four spatial merge candidates have already been derived, no further spatial merge candidate derivation process needs to be performed. [00169] On the other hand, when the variable N is not E, or the sum of the values of the flags availableFlagA, availableFlagB, availableFlagC, and availableFlagD is not 4 (step S1102 of FIG. 16: NO) (that is, four spatial merge candidates have not been derived), the flow proceeds to step S1103. When the neighboring prediction block N is included in the same coding block as the coding block that includes the derivation-target prediction block (step S1103 of FIG. 16: YES), the value of the flag availableFlagN of the merge candidate N is set to 0 (step S1107 of FIG. 16), both motion vectors mvL0N and mvL1N of the merge candidate N are set to (0, 0) (step S1108 of FIG. 16), and both flags predFlagL0N and predFlagL1N of the merge candidate N are set to 0 (step S1109 of FIG. 16); the flow then proceeds to step S1118. When the neighboring prediction block N is included in the same coding block as the coding block that includes the derivation-target prediction block (step S1103 of FIG. 16: YES), the neighboring prediction block N is not referred to, so that the merge candidate derivation process and the merge candidate list construction process can be performed in parallel. [00170] Specifically, the case where the partition mode (PartMode) is 2NxN partition (PART_2NxN), 2NxnU partition (PART_2NxnU), or 2NxnD partition (PART_2NxnD) and the PartIdx of the processing-target prediction block is 1 is the case where the neighboring prediction block B is included in the same coding block as the coding block that includes the derivation-target prediction block.
In this case, since the neighboring prediction block B is the prediction block having PartIdx 0, the neighboring prediction block B is not referred to, so that the merge candidate derivation process and the merge candidate list construction process can be performed in parallel. [00171] Additionally, the case where the partition mode (PartMode) is Nx2N partition (PART_Nx2N), nLx2N partition (PART_nLx2N), or nRx2N partition (PART_nRx2N) and the PartIdx of the processing-target prediction block is 1 is the case where the neighboring prediction block A is included in the same coding block as the coding block that includes the derivation-target prediction block. In this case, since the neighboring prediction block A is the prediction block having PartIdx 0, the neighboring prediction block A is not referred to, so that the merge candidate derivation process and the merge candidate list construction process can be performed in parallel. [00172] Additionally, when the partition mode (PartMode) is NxN partition (PART_NxN) and the PartIdx of the processing-target prediction block is 1, 2, or 3, the neighboring prediction block N may be included in the same coding block as the coding block that includes the derivation-target prediction block. [00173] On the other hand, when the neighboring prediction block N is not included in the same coding block as the coding block that includes the processing-target prediction block (step S1103 of FIG. 16: NO), the prediction blocks N neighboring the encoding/decoding target prediction block are specified, and, when the respective prediction blocks N can be used, their coding information is acquired from the coding information storage memory 115 or 210 (step S1104 of FIG. 16). [00174] When the neighboring prediction block N cannot be used (step S1105 of FIG. 16: NO) or the prediction mode PredMode of the prediction block N is intra prediction (MODE_INTRA) (step S1106 of FIG. 16: NO), the value of the flag availableFlagN of the merge candidate N is set to 0 (step S1107 of FIG. 16), both motion vectors mvL0N and mvL1N of the merge candidate N are set to (0, 0) (step S1108 of FIG. 16), and both flags predFlagL0N and predFlagL1N of the merge candidate N are set to 0 (step S1109); the flow then proceeds to step S1118. Here, specific examples of the case where the neighboring prediction block N cannot be used include the case where the neighboring prediction block N is positioned outside the encoding/decoding target slice and the case where its encoding/decoding process has not yet been completed because the neighboring prediction block N is later in the order of the encoding/decoding process. [00175] On the other hand, when the neighboring prediction block N is outside the same coding block as the derivation-target prediction block (step S1103 of FIG. 16: NO), can be used (step S1105 of FIG. 16: YES), and its prediction mode PredMode is not intra prediction (MODE_INTRA) (step S1106 of FIG. 16: YES), the inter prediction information of the prediction block N is used as the inter prediction information of the merge candidate N. The value of the flag availableFlagN of the merge candidate N is set to 1 (step S1110 of FIG. 16), the motion vectors mvL0N and mvL1N of the merge candidate N are set to the same values as the motion vectors mvL0[xN][yN] and mvL1[xN][yN] of the prediction block N (step S1111 of FIG. 16), the reference indices refIdxL0N and refIdxL1N of the merge candidate N are set to the same values as the reference indices refIdxL0[xN][yN] and refIdxL1[xN][yN] of the prediction block N (step S1112 of FIG. 16), and the flags predFlagL0N and predFlagL1N of the merge candidate N are set to the flags predFlagL0[xN][yN] and predFlagL1[xN][yN] of the prediction block N (step S1113 of FIG. 16).
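Steps S1103 to S1113 for one neighboring block N can be sketched as follows. This is an illustrative model, not the normative code: a neighbor is represented as a dict of its coding information, and `None` stands for a block that cannot be used (outside the slice, or not yet encoded/decoded):

```python
def derive_spatial_candidate(neighbor, same_coding_block):
    """Derive the merge candidate N from one neighboring prediction block.
    Returns the fields availableFlagN, mvLXN, predFlagLXN (and refIdxLXN
    when available), as set in steps S1107-S1113 (illustrative sketch)."""
    unusable = (neighbor is None                              # step S1105: NO
                or same_coding_block                          # step S1103: YES
                or neighbor.get("pred_mode") == "MODE_INTRA") # step S1106: NO
    if unusable:
        # steps S1107-S1109: flag cleared, zero vectors, prediction flags 0
        return {"available": 0, "mv_l0": (0, 0), "mv_l1": (0, 0),
                "pred_flag_l0": 0, "pred_flag_l1": 0}
    # steps S1110-S1113: copy the neighbor's inter prediction information
    return {"available": 1,
            "mv_l0": neighbor["mv_l0"], "mv_l1": neighbor["mv_l1"],
            "ref_idx_l0": neighbor["ref_idx_l0"],
            "ref_idx_l1": neighbor["ref_idx_l1"],
            "pred_flag_l0": neighbor["pred_flag_l0"],
            "pred_flag_l1": neighbor["pred_flag_l1"]}
```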
Here, xN and yN are indices indicating the position of the top-left pixel of the prediction block N in the picture. [00176] Subsequently, the flags predFlagL0N and predFlagL1N, the reference indices refIdxL0N and refIdxL1N, and the motion vectors mvL0N and mvL1N of the merge candidate N are compared with those of the merge candidates that have already been derived (step S1114 of FIG. 16). When the same merge candidate is not present (step S1115 of FIG. 16: YES), the merge candidate N is added at the position where the merge index of the merge candidate list mergeCandList has the same value as numMergeCand (step S1116 of FIG. 16), and the number of merge candidates numMergeCand is incremented by 1 (step S1117 of FIG. 16). On the other hand, when the same merge candidate is present (step S1115 of FIG. 16: NO), steps S1116 and S1117 are skipped and the flow proceeds to step S1118. [00177] The processes of steps S1102 to S1117 are performed repeatedly for N = A, B, C, D, and E (steps S1101 to S1118 of FIG. 16). [00178] Next, the method of deriving the reference indices of the temporal merge candidate in step S102 of FIG. 15 will be described in detail. The L0 and L1 reference indices of the temporal merge candidate are derived. [00179] In the present embodiment, the reference indices of the temporal merge candidate are derived using the reference indices of a spatial merge candidate (that is, the reference indices used in the prediction blocks neighboring the encoding/decoding target block). This is because, when the temporal merge candidate is selected, the reference index of the encoding/decoding target prediction block has a high correlation with the reference indices of the prediction blocks neighboring the encoding/decoding target block, which become the spatial merge candidates.
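The identity check and conditional append of steps S1114 to S1117 in paragraph [00176] above can be sketched as follows (illustrative; candidates are modeled as dicts and compared field by field):

```python
def add_if_new(merge_cand_list, cand):
    """Steps S1114-S1117: append the candidate only when no already-derived
    candidate has identical prediction flags, reference indices, and motion
    vectors; otherwise skip steps S1116/S1117 (illustrative sketch)."""
    keys = ("pred_flag_l0", "pred_flag_l1",
            "ref_idx_l0", "ref_idx_l1", "mv_l0", "mv_l1")
    for existing in merge_cand_list:
        if all(existing.get(k) == cand.get(k) for k in keys):
            return merge_cand_list          # same candidate present: not added
    merge_cand_list.append(cand)            # merge index == previous numMergeCand
    return merge_cand_list
```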
In particular, in the present embodiment, only the reference indexes of the prediction block A neighboring the left side of the encoding/decoding target prediction block are used. This is because the prediction blocks A and B neighboring the sides of the encoding/decoding target prediction block, among the neighboring prediction blocks A, B, C, D, and E that also serve as the spatial merge candidates, have higher correlation than the prediction blocks C, D, and E neighboring the corners of the encoding/decoding target prediction block. Since the prediction blocks C, D, and E, which have relatively low correlation, are not used and the prediction blocks to be referred to are limited to the prediction block A, it is possible to improve the coding efficiency resulting from the derivation of the reference indexes of the temporal merge candidate, and to reduce the amount of processing and the amount of memory access associated with the process of deriving the reference indexes of the temporal merge candidate. [00180] FIGS. 17A to 17H are diagrams illustrating the neighboring blocks referred to in the process of deriving the reference indexes of the temporal merge candidate of the present embodiment. In this embodiment, whether or not to refer to the prediction block neighboring the left side of the derivation target prediction block is switched according to the partition index PartIdx of the prediction block, regardless of the partition mode (PartMode) of the coding block. When the partition index PartIdx of the prediction block is 0, the prediction block neighboring the left side is referred to. When the partition index PartIdx is not 0, the neighboring prediction block is not referred to, and a predefined value is used. When the partition index PartIdx of the prediction block is 0, in any partition mode (PartMode), the prediction block neighboring the left side always lies outside the coding block.
When the partition index PartIdx of the prediction block is not 0, the prediction block neighboring the left side may belong to the coding block depending on the partition mode (PartMode). When the partition mode (PartMode) is the 2Nx2N partition (PART_2Nx2N), as shown in FIG. 17A, the prediction block A0 neighboring the left side of the derivation target prediction block is referred to, and the LX reference index of the temporal merge candidate is set to the value of the LX reference index of the prediction block A0. [00181] When the partition mode (PartMode) of dividing a processing target coding block into two prediction blocks arranged in the vertical direction is the 2NxN partition (PART_2NxN), the 2NxnU partition (PART_2NxnU), or the 2NxnD partition (PART_2NxnD), and when the partition mode (PartMode) of dividing a processing target coding block into two prediction blocks arranged in the horizontal direction is the Nx2N partition (PART_Nx2N), the nLx2N partition (PART_nLx2N), or the nRx2N partition (PART_nRx2N), as illustrated in FIGS. 17B, 17C, 17D, 17E, 17F, and 17G, the prediction block A0 neighboring the left side is referred to in the prediction block whose partition index PartIdx is 0, and the LX reference index of the temporal merge candidate is set to the value of the LX reference index of the prediction block A0. The neighboring prediction block is not referred to in the prediction block whose derivation target partition index PartIdx is 1, and the LX reference index of the temporal merge candidate is set to the default value 0. Since the prediction block A0 to be referred to does not belong to the coding block, the reference indexes of the temporal merge candidates of the two prediction blocks, whose partition indexes PartIdx are 0 and 1, can be derived in parallel. [00182] When the partition mode (PartMode) of dividing a processing target coding block into four prediction blocks in the vertical and horizontal directions is the NxN partition (PART_NxN), as illustrated in FIG.
17H, the prediction block A0 neighboring the left side is referred to in the prediction block whose derivation target partition index PartIdx is 0, and the LX reference index of the temporal merge candidate is set to the value of the LX reference index of the prediction block A0. In the prediction blocks whose derivation target partition indexes PartIdx are 1, 2, and 3, the neighboring prediction block is not referred to, and the LX reference index of the temporal merge candidate is set to the default value 0. Since the prediction block A0 to be referred to does not belong to the coding block, the reference indexes of the temporal merge candidates of the four prediction blocks, whose partition indexes PartIdx are 0, 1, 2, and 3, can be derived in parallel. [00183] However, when the neighboring prediction block A does not perform the LX prediction, the LX reference index value of the temporal merge candidate is set to the default value 0. The reason the predefined value of the LX reference index of the temporal merge candidate is set to 0 when the neighboring prediction block A does not perform the LX prediction, or when the partition index PartIdx of the derivation target prediction block is not 0, is that the reference image whose reference index value is 0 in inter prediction is most likely to be selected. However, the present invention is not limited to this; the default value of the reference index may be a value other than 0 (1, 2, or the like), and a syntax element indicating the default value of the reference index may be provided in a bitstream at the slice, picture, or sequence level and transmitted, so that the default value can be selected on the encoder side. [00184] FIG. 18 is a flowchart for describing the flow of the process of deriving the reference indexes of the temporal merge candidate in step S102 of FIG. 15 according to the present embodiment.
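Before walking through the flowchart, the rule of paragraphs [00180] to [00183] can be summarized in a short sketch (Python; the argument names describing the coding information of the neighboring prediction block A are hypothetical):

```python
DEFAULT_REF_IDX = 0  # predefined value; the description allows other values signaled in the bitstream

def temporal_merge_ref_idx_lx(part_idx, neighbor_a_performs_lx, neighbor_a_ref_idx_lx):
    """LX reference index of the temporal merge candidate (rule of FIGS. 17A-17H)."""
    if part_idx != 0:
        # The neighboring block is not referred to, which enables parallel derivation.
        return DEFAULT_REF_IDX
    if not neighbor_a_performs_lx:
        # The neighboring prediction block A does not perform the LX prediction.
        return DEFAULT_REF_IDX
    return neighbor_a_ref_idx_lx  # reference index of the left neighbor A0
```

The sketch makes the parallelism argument visible: for every prediction block with PartIdx other than 0, the result is a constant and no neighbor memory access is needed.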
First, when the partition index PartIdx is 0 (step S2104: YES), the coding information of the prediction block A neighboring the left side of the derivation target prediction block is acquired from the coding information storage memory 115 or 210 (step S2111). [00185] The subsequent processes of steps S2113 to S2115 are carried out for each of the lists L0 and L1 (steps S2112 to S2116). LX is set to L0 when the L0 reference index of the temporal merge candidate is derived, and LX is set to L1 when the L1 reference index of the temporal merge candidate is derived. However, when the slice type slice_type is the P slice, since the inter prediction mode includes the L0 prediction (Pred_L0) only and does not include the L1 prediction (Pred_L1) or the bi-prediction (Pred_BI), the processes associated with L1 can be omitted. [00186] When the flag predFlagLX[xA][yA], which indicates whether or not to perform the LX prediction of the prediction block A, is not 0 (step S2113: YES), the LX reference index refIdxLXCol of the temporal merge candidate is set to the same value as the LX reference index refIdxLX[xA][yA] of the prediction block A (step S2114). Here, xA and yA are indices that indicate the position of a pixel in the upper left corner of the prediction block A in the image. [00187] In the present embodiment, for the prediction block N (N = A, B), when the prediction block N is outside the encoding/decoding target slice and cannot be used, when the prediction block N is later than the encoding/decoding target prediction block in the encoding/decoding order and cannot be used because it is not yet encoded/decoded, or when the prediction mode PredMode of the prediction block N is intra prediction (MODE_INTRA), both the flag predFlagL0[xN][yN], which indicates whether or not to use the L0 prediction, and the flag predFlagL1[xN][yN], which indicates whether or not to use the L1 prediction, of the prediction block N are 0.
Here, xN and yN are indices that indicate the position of a pixel in the upper left corner of the prediction block N in the image. When the prediction mode PredMode of the prediction block N is inter prediction (MODE_INTER) and its inter prediction mode is the L0 prediction (Pred_L0), the flag predFlagL0[xN][yN], which indicates whether or not to use the L0 prediction of the prediction block N, is 1, and the flag predFlagL1[xN][yN], which indicates whether or not to use the L1 prediction, is 0. When the inter prediction mode of the prediction block N is the L1 prediction (Pred_L1), the flag predFlagL0[xN][yN], which indicates whether or not to use the L0 prediction of the prediction block N, is 0, and the flag predFlagL1[xN][yN], which indicates whether or not to use the L1 prediction, is 1. When the inter prediction mode of the prediction block N is the bi-prediction (Pred_BI), both the flag predFlagL0[xN][yN], which indicates whether or not to use the L0 prediction of the prediction block N, and the flag predFlagL1[xN][yN], which indicates whether or not to use the L1 prediction, are 1. [00188] When the flag predFlagLX[xA][yA], which indicates whether or not to perform the LX prediction of the prediction block A, is 0 (step S2113: NO), the LX reference index refIdxLXCol of the temporal merge candidate is set to the default value 0 (step S2115). [00189] The processes of steps S2113 to S2115 are performed for each of L0 and L1 (steps S2112 to S2116), and the reference index derivation process ends. [00190] On the other hand, when the partition index PartIdx is not 0 (step S2104: NO), the subsequent process of step S2121 is performed for each of L0 and L1 (steps S2118 to S2122). LX is set to L0 when the L0 reference index of the temporal merge candidate is to be derived, and LX is set to L1 when the L1 reference index is to be derived.
However, when the slice type slice_type is the P slice, since the inter prediction mode includes the L0 prediction (Pred_L0) only and does not include the L1 prediction (Pred_L1) or the bi-prediction (Pred_BI), the processes associated with L1 can be omitted. [00191] The LX reference index refIdxLXCol of the temporal merge candidate is set to the default value 0 (step S2121). [00192] The processes up to step S2121 are performed for each of L0 and L1 (steps S2118 to S2122), and the reference index derivation process ends. [00193] In the present embodiment, although whether or not to refer to the prediction block neighboring the left side of the derivation target prediction block is switched, whether or not to refer to the prediction block neighboring the upper side may be switched instead of the prediction block neighboring the left side. [00194] Next, the method of deriving the merge candidate of a different time in step S103 of FIG. 15 will be described in detail. FIG. 19 is a flowchart for describing the flow of the process of deriving the merge candidate of a different time in step S103 of FIG. 15. [00195] First, the image colPic of a different time is derived using the slice type slice_type described in the slice header of each slice and the flag collocated_from_l0_flag, which indicates which of the L0 reference list and the L1 reference list of the image including the processing target prediction block contains the reference image used by the motion vector predictor candidate in the temporal direction or by the merge candidate (step S3101). [00196] FIG. 20 is a flowchart for describing the flow of the process of deriving the image colPic of a different time in step S3101 of FIG. 19.
When the slice type slice_type is the B slice and the flag collocated_from_l0_flag is 0 (step S3201: YES, step S3202: YES), the image at reference index 0 of the L1 reference list, RefPicList1[0], becomes the image colPic of a different time (step S3203). In other cases, that is, when the slice type slice_type is the B slice and the flag collocated_from_l0_flag is 1 (step S3201: YES, step S3202: NO), or when the slice type slice_type is the P slice (step S3201: NO, step S3204: YES), the image at reference index 0 of the L0 reference list, RefPicList0[0], becomes the image colPic of a different time (step S3205). [00197] Subsequently, the flow returns to the flowchart of FIG. 19, the prediction block colPU of a different time is derived, and its coding information is acquired (step S3102). [00198] FIG. 21 is a flowchart for describing the flow of the process of deriving the prediction block colPU of the image colPic of a different time in step S3102 of FIG. 19. [00199] First, the prediction block positioned at the lower right corner (outside) of the same position as the processing target prediction block in the image colPic of a different time is set as the prediction block colPU of a different time (step S3301). This prediction block corresponds to the prediction block T0 illustrated in FIG. 9. [00200] Subsequently, the coding information of the prediction block colPU of a different time is acquired (step S3302). When the prediction block colPU of a different time cannot be used, or when the prediction mode PredMode of the prediction block colPU of a different time is intra prediction (MODE_INTRA) (step S3303: YES, step S3304: YES), the prediction block positioned at the upper left corner of the center of the same position as the processing target prediction block in the image colPic of a different time is set as the prediction block colPU of a different time (step S3305).
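The selection of FIG. 20 amounts to picking one of the two reference lists; a minimal Python sketch (the list and flag names follow the description, while the function name is hypothetical):

```python
def derive_col_pic(slice_type, collocated_from_l0_flag, ref_pic_list0, ref_pic_list1):
    """Choose the image colPic of a different time (steps S3201 to S3205 of FIG. 20)."""
    if slice_type == 'B' and collocated_from_l0_flag == 0:
        return ref_pic_list1[0]   # image at reference index 0 of the L1 reference list
    # B slice with collocated_from_l0_flag == 1, or P slice:
    return ref_pic_list0[0]       # image at reference index 0 of the L0 reference list
```
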
This prediction block corresponds to the prediction block T1 illustrated in FIG. 9. [00201] Then, the flow returns to the flowchart of FIG. 19, the flag availableFlagL0Col, which indicates whether the L0 motion vector mvL0Col of the temporal merge candidate Col, derived from the prediction block of another image at the same position as the encoding/decoding target prediction block, is valid, is derived (step S3103), and the flag availableFlagL1Col, which indicates whether the L1 motion vector mvL1Col of the temporal merge candidate Col is valid, is derived (step S3104). Additionally, when the flag availableFlagL0Col or the flag availableFlagL1Col is 1, the flag availableFlagCol, which indicates whether the temporal merge candidate Col is valid, is set to 1. [00202] FIG. 22 is a flowchart for describing the flow of the process of deriving the inter prediction information of the temporal merge candidate in steps S3103 and S3104 of FIG. 19. The derivation target list, L0 or L1, of the temporal merge candidate is referred to as LX, and the prediction using LX is referred to as the LX prediction. The same applies to the following description unless otherwise noted. LX is L0 when step S3103, which is the process of deriving the L0 information of the temporal merge candidate, is applied, and LX is L1 when step S3104, which is the process of deriving the L1 information of the temporal merge candidate, is applied. [00203] When the prediction mode PredMode of the prediction block colPU of a different time is intra prediction (MODE_INTRA) or the block cannot be used (step S3401: NO, step S3402: NO), it is assumed that no temporal merge candidate is present. Both the flag availableFlagLXCol and the flag predFlagLXCol are set to 0 (step S3403), the motion vector mvLXCol is set to (0, 0) (step S3404), and the process of deriving the inter prediction information of the temporal merge candidate ends.
[00204] When the prediction block colPU can be used and the prediction mode PredMode is not intra prediction (MODE_INTRA) (step S3401: YES, step S3402: YES), mvCol, refIdxCol, and availableFlagCol are derived in the following flow. [00205] When the flag PredFlagL0[xPCol][yPCol], which indicates whether the L0 prediction of the prediction block colPU is used, is 0 (step S3405: YES), since the inter prediction mode of the prediction block colPU is the L1 prediction (Pred_L1), the motion vector mvCol is set to the same value as MvL1[xPCol][yPCol], which is the L1 motion vector of the prediction block colPU (step S3406), the reference index refIdxCol is set to the same value as the L1 reference index RefIdxL1[xPCol][yPCol] (step S3407), and the list ListCol is set to L1 (step S3408). Here, xPCol and yPCol are indices that indicate the position of a pixel in the upper left corner of the prediction block colPU in the image colPic of a different time. [00206] On the other hand, when the L0 prediction flag PredFlagL0[xPCol][yPCol] of the prediction block colPU is not 0 (step S3405 of FIG. 22: NO), it is determined whether the L1 prediction flag PredFlagL1[xPCol][yPCol] of the prediction block colPU is 0. When the L1 prediction flag PredFlagL1[xPCol][yPCol] of the prediction block colPU is 0 (step S3409: YES), the motion vector mvCol is set to the same value as MvL0[xPCol][yPCol], which is the L0 motion vector of the prediction block colPU (step S3410), the reference index refIdxCol is set to the same value as the L0 reference index RefIdxL0[xPCol][yPCol] (step S3411), and the list ListCol is set to L0 (step S3412).
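The branch just described can be sketched as follows (a Python sketch with a hypothetical dictionary layout for the colPU coding information; the bi-prediction case of step S3413 is covered by the paragraphs that follow and is only stubbed here):

```python
def select_col_info(col_pu):
    """Pick mvCol, refIdxCol, and ListCol from the colPU inter prediction
    information (steps S3405 to S3412 of FIG. 22)."""
    if not col_pu['pred_flag_l0']:      # L1 prediction only (Pred_L1)
        return col_pu['mv_l1'], col_pu['ref_idx_l1'], 'L1'
    if not col_pu['pred_flag_l1']:      # L0 prediction only (Pred_L0)
        return col_pu['mv_l0'], col_pu['ref_idx_l0'], 'L0'
    # Bi-prediction (Pred_BI): one of the two lists is chosen by the POC
    # comparison of FIG. 23 (step S3413), not shown in this sketch.
    raise NotImplementedError
```
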
[00207] When both the L0 prediction flag PredFlagL0[xPCol][yPCol] of the prediction block colPU and the L1 prediction flag PredFlagL1[xPCol][yPCol] of the prediction block colPU are not 0 (step S3405: NO, step S3409: NO), since the inter prediction mode of the prediction block colPU is the bi-prediction (Pred_BI), one of the two motion vectors L0 and L1 is selected (step S3413). [00208] FIG. 23 is a flowchart illustrating the flow of the process of deriving the inter prediction information of the temporal merge candidate when the inter prediction mode of the prediction block colPU is the bi-prediction (Pred_BI). [00209] First, it is determined whether the POCs of all images added to all reference lists are less than the POC of the current encoding/decoding target image (step S3501). When the POCs of all images added to all reference lists L0 and L1 of the prediction block colPU are less than the POC of the current encoding/decoding target image (step S3501: YES) and LX is L0 (that is, the motion vector predictor candidates of the L0 motion vector of the encoding/decoding target image are being derived) (step S3502: YES), the inter prediction information of the L0 list of the prediction block colPU is selected. When LX is L1 (that is, the motion vector predictor candidates of the L1 motion vector of the encoding/decoding target image are being derived) (step S3502: NO), the inter prediction information of the L1 list of the prediction block colPU is selected. On the other hand, when at least one of the POCs of the images added to all reference lists L0 and L1 of the prediction block colPU is greater than the POC of the current encoding/decoding target image (step S3501: NO) and the flag collocated_from_l0_flag is 0 (step S3503: YES), the inter prediction information of the L0 list of the prediction block colPU is selected.
Conversely, when the flag collocated_from_l0_flag is 1 (step S3503: NO), the inter prediction information of the L1 list of the prediction block colPU is selected. [00210] When the inter prediction information of the L0 list of the prediction block colPU is selected (step S3502: YES, step S3503: YES), the motion vector mvCol is set to the same value as MvL0[xPCol][yPCol] (step S3504), the reference index refIdxCol is set to the same value as RefIdxL0[xPCol][yPCol] (step S3505), and the list ListCol is set to L0 (step S3506). [00211] When the inter prediction information of the L1 list of the prediction block colPU is selected (step S3502: NO, step S3503: NO), the motion vector mvCol is set to the same value as MvL1[xPCol][yPCol] (step S3507), the reference index refIdxCol is set to the same value as RefIdxL1[xPCol][yPCol] (step S3508), and the list ListCol is set to L1 (step S3509). [00212] Again with reference to FIG. 22, when the inter prediction information can be acquired from the prediction block colPU, both the flag availableFlagLXCol and the flag predFlagLXCol are set to 1 (step S3414). [00213] Subsequently, the motion vector mvCol is scaled to obtain the LX motion vector mvLXCol of the temporal merge candidate (step S3415). The flow of this motion vector scaling process will be described with reference to FIGS. 24 and 25. [00214] FIG. 24 is a flowchart illustrating the flow of the motion vector scaling process of step S3415 of FIG. 22. [00215] The POC of the reference image corresponding to the reference index refIdxCol referred to via the list ListCol of the prediction block colPU is subtracted from the POC of the image colPic of a different time to derive the inter-image distance td (step S3601).
When the POC of the reference image referred to via the list ListCol of the prediction block colPU is earlier in display order than the image colPic of a different time, the inter-image distance td has a positive value. When the POC of the reference image referred to via the list ListCol of the prediction block colPU is later in display order than the image colPic of a different time, the inter-image distance td has a negative value. td = (POC of the image colPic of a different time) - (POC of the reference image referred to via the list ListCol of the prediction block colPU) [00216] The POC of the reference image corresponding to the LX reference index of the temporal merge candidate derived in step S102 of FIG. 15 is subtracted from the POC of the current encoding/decoding target image to derive the inter-image distance tb (step S3602). When the reference image referred to via the LX list of the current encoding/decoding target image is earlier in display order than the current encoding/decoding target image, the inter-image distance tb has a positive value. When the reference image referred to via the LX list of the current encoding/decoding target image is later in display order than the current encoding/decoding target image, the inter-image distance tb has a negative value. tb = (POC of the current encoding/decoding target image) - (POC of the reference image corresponding to the LX reference index of the temporal merge candidate) [00217] Subsequently, the inter-image distances td and tb are compared (step S3603). When the inter-image distances td and tb are equal (step S3603: YES), the LX motion vector mvLXCol of the temporal merge candidate is set to the same value as the motion vector mvCol (step S3604), and the scaling process ends.
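The distance comparison and the scaling of steps S3601 to S3608 can be gathered into one sketch for a single motion vector component (Python; the C-style truncating division helper and the 16-bit range assumed for ClipMv are assumptions, not taken from the description):

```python
def tdiv(a, b):
    """C-style integer division that truncates toward zero (assumption for Abs(td/2) and tx)."""
    q = abs(a) // abs(b)
    return q if (a >= 0) == (b >= 0) else -q

def scale_mv(mv_col, tb, td):
    """Scale one component of mvCol by tb/td in integer arithmetic (FIGS. 24 and 25)."""
    if td == tb:                                       # step S3603/S3604: distances equal
        return mv_col                                  # mvLXCol = mvCol
    tx = tdiv(16384 + abs(tdiv(td, 2)), td)            # step S3606
    dist_scale_factor = (tb * tx + 32) >> 6            # step S3607
    prod = dist_scale_factor * mv_col                  # step S3608
    mv = (1 if prod >= 0 else -1) * ((abs(prod) + 127) >> 8)
    return max(-32768, min(32767, mv))                 # ClipMv: assumed 16-bit clipping
```

For tb = 2 and td = 4 the integer factors reproduce the exact ratio: tx = 4096, DistScaleFactor = 128, so a vector component of 8 scales to 4.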
mvLXCol = mvCol [00218] On the other hand, when the inter-image distances td and tb are not equal (step S3603: NO), the scaling process (step S3605) is performed by multiplying mvCol by the scaling factor tb/td according to the following expression, to obtain the scaled LX motion vector mvLXCol of the temporal merge candidate. mvLXCol = tb / td * mvCol [00219] FIG. 25 illustrates an example in which the scaling process of step S3605 is performed in integer arithmetic. The processes of steps S3606 to S3608 of FIG. 25 correspond to the process of step S3605 of FIG. 24. [00220] First, similarly to the flowchart of FIG. 24, the inter-image distance td and the inter-image distance tb are derived (steps S3601 and S3602). [00221] Subsequently, the inter-image distances td and tb are compared (step S3603). When the inter-image distances td and tb are equal (step S3603: YES), similarly to the flowchart of FIG. 24, the LX motion vector mvLXCol of the temporal merge candidate is set to the same value as the motion vector mvCol (step S3604), and the scaling process ends. mvLXCol = mvCol [00222] On the other hand, when the inter-image distances td and tb are not equal (step S3603: NO), a variable tx is derived according to the following expression (step S3606). tx = (16384 + Abs(td / 2)) / td [00223] Subsequently, a scaling factor DistScaleFactor is derived according to the following expression (step S3607). DistScaleFactor = (tb * tx + 32) >> 6 [00224] Subsequently, the scaled LX motion vector mvLXCol of the temporal merge candidate is obtained according to the following expression (step S3608). mvLXCol = ClipMv(Sign(DistScaleFactor * mvCol) * ((Abs(DistScaleFactor * mvCol) + 127) >> 8)) [00225] Subsequently, again with reference to the flowchart of FIG. 19, when the temporal merge candidate is present (step S3105: YES), the temporal merge candidate
is added to the position where the merge index of the merge candidate list mergeCandList has the same value as numMergeCand (step S3106), the number of merge candidates numMergeCand is incremented by 1 (step S3107), and the temporal merge candidate derivation process ends. On the other hand, when the temporal merge candidate is not present (step S3105: NO), steps S3106 and S3107 are skipped and the temporal merge candidate derivation process ends. [00226] Next, the method of deriving additional merge candidates, which is the process of step S104 of FIG. 15 performed by the additional merge candidate derivation unit 134 of FIG. 12 and the additional merge candidate derivation unit 234 of FIG. 13, will be described in detail. FIG. 26 is a flowchart for describing the flow of the additional merge candidate derivation process of step S104 of FIG. 15. [00227] In the additional merge candidate derivation process performed by the additional merge candidate derivation unit 134 of FIG. 12 and the additional merge candidate derivation unit 234 of FIG. 13, a plurality of merge candidates having different values of inter prediction information is derived and added to the merge candidate list, in order to broaden the choice of merge candidates and thereby improve coding efficiency. In particular, in the additional merge candidate derivation process of FIG. 26, the prediction mode and the motion vector values are fixed, and a plurality of merge candidates having different reference index values is derived and added to the merge candidate list (steps S5101 to S5119 of FIG. 26). [00228] First, when the slice type is the P slice (step S5101 of FIG. 26: YES), the number of L0 reference indexes is set to the variable numRefIdx, which indicates the number of reference indexes (step S5102 of FIG. 26). On the other hand, when the slice type is not the P slice (step S5101 of FIG.
26: NO) (that is, when the slice type is the B slice), the smaller of the number of L0 reference indexes and the number of L1 reference indexes is set to the variable numRefIdx, which indicates the number of reference indexes (step S5103 of FIG. 26). Subsequently, the reference index i is set to 0 (step S5104 of FIG. 26). [00229] Subsequently, an additional merge candidate whose motion vector value for the prediction mode corresponding to the slice type is (0, 0) is derived while changing the reference index i, and is added to the merge candidate list (steps S5105 to S5119 of FIG. 26). [00230] First, when the number of merge candidates numMergeCand is less than the maximum number of merge candidates maxNumMergeCand (step S5106 of FIG. 26: YES), the flow proceeds to step S5107. When the number of merge candidates numMergeCand is not less than the maximum number of merge candidates maxNumMergeCand (step S5106 of FIG. 26: NO), the additional merge candidate derivation process ends. Subsequently, when the reference index i is less than the variable numRefIdx (step S5107 of FIG. 26: YES), the flow proceeds to step S5109. When the reference index i is not less than the variable numRefIdx (step S5107 of FIG. 26: NO), the additional merge candidate derivation process ends. [00231] Subsequently, when the slice type is the P slice (step S5109 of FIG. 26: YES), (0, 0) is set to the motion vectors mvL0Zero and mvL1Zero of the additional merge candidate (step S5110 of FIG. 26), the value of the reference index i and -1 are set to the reference indexes refIdxL0Zero and refIdxL1Zero of the additional merge candidate, respectively (step S5111 of FIG. 26), and 1 and 0 are set to the flags predFlagL0Zero and predFlagL1Zero of the additional merge candidate, respectively (step S5112 of FIG. 26). Then, the flow proceeds to step S5116.
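The loop of FIG. 26, including the B-slice branch described next, can be sketched as follows (Python; the dictionary layout of a candidate is a hypothetical simplification of the inter prediction information):

```python
def derive_additional_merge_candidates(merge_cand_list, slice_type,
                                       num_ref_idx_l0, num_ref_idx_l1,
                                       max_num_merge_cand):
    """Zero-motion-vector candidates with increasing reference index (FIG. 26)."""
    if slice_type == 'P':
        num_ref_idx = num_ref_idx_l0                       # step S5102
    else:                                                  # B slice
        num_ref_idx = min(num_ref_idx_l0, num_ref_idx_l1)  # step S5103
    for i in range(num_ref_idx):                           # steps S5105-S5119
        if len(merge_cand_list) >= max_num_merge_cand:     # step S5106
            break
        if slice_type == 'P':
            # L0 prediction only: predFlagL0 = 1, predFlagL1 = 0 (steps S5110-S5112)
            cand = dict(mv_l0=(0, 0), mv_l1=(0, 0), ref_idx_l0=i, ref_idx_l1=-1,
                        pred_flag_l0=1, pred_flag_l1=0)
        else:
            # Bi-prediction: both flags 1, both reference indexes i (steps S5113-S5115)
            cand = dict(mv_l0=(0, 0), mv_l1=(0, 0), ref_idx_l0=i, ref_idx_l1=i,
                        pred_flag_l0=1, pred_flag_l1=1)
        merge_cand_list.append(cand)                       # steps S5116-S5117
```
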
[00232] On the other hand, when the slice type is not the P slice (step S5109 of FIG. 26: NO) (that is, when the slice type is the B slice), (0, 0) is set to the motion vectors mvL0Zero and mvL1Zero of the additional merge candidate (step S5113 of FIG. 26), the value of the reference index i is set to both reference indexes refIdxL0Zero and refIdxL1Zero of the additional merge candidate (step S5114 of FIG. 26), and 1 is set to both flags predFlagL0Zero and predFlagL1Zero of the additional merge candidate (step S5115 of FIG. 26). Then, the flow proceeds to step S5116. [00233] Subsequently, the additional merge candidate is added to the position where the merge index of the merge candidate list mergeCandList is indicated by the same value as numMergeCand (step S5116 of FIG. 26), and the number of merge candidates numMergeCand is incremented by 1 (step S5117 of FIG. 26). Subsequently, the index i is incremented by 1 (step S5118 of FIG. 26), and the flow proceeds to step S5119. [00234] The processes of steps S5106 to S5118 are performed repeatedly for the respective reference indexes i (steps S5105 to S5119 of FIG. 26). [00235] In FIG. 26, although the prediction mode and the motion vector values are fixed and a plurality of merge candidates having different reference index values is derived and added to the merge candidate list, a plurality of merge candidates of different prediction modes may be derived and added to the merge candidate list, and merge candidates having different motion vector values may be derived and added to the merge candidate list. When the motion vector value is changed, merge candidates may be added while changing the motion vector value in the order of (0, 0), (1, 0), (-1, 0), (0, 1), and (0, -1), for example. [00236] Next, the inter prediction information selection unit 137 of the inter prediction information derivation unit 104 of the moving image encoding device will be described. FIG.
37 is a flowchart for describing the flow of the process of the inter prediction information selection unit 137 of the inter prediction information derivation unit 104 of the moving image encoding device. In the first practical example of FIG. 12, in the inter prediction information selection unit 137 of the inter prediction information derivation unit 104 of the moving image encoding device, when the number of merge candidates numMergeCand is greater than 0 (step S8101 of FIG. 37: YES), a merge candidate is selected from among the valid merge candidates that are added to the merge candidate list and whose merge indexes are within the range of 0 to (numMergeCand - 1). The inter prediction information of the selected merge candidate, which includes the flags predFlagL0[xP][yP] and predFlagL1[xP][yP] indicating whether or not to use the L0 prediction and the L1 prediction of the respective prediction blocks, the reference indexes refIdxL0[xP][yP] and refIdxL1[xP][yP], and the motion vectors mvL0[xP][yP] and mvL1[xP][yP], is supplied to the motion compensated prediction unit 105, and the merge index identifying the selected merge candidate is supplied to the prediction method determination unit 107 (step S8102 of FIG. 37). When the value of the merge index mergeIdx is less than the value of the number of merge candidates numMergeCand, the merge index mergeIdx indicates a valid merge candidate added to the merge candidate list mergeCandList. When the value of the merge index mergeIdx is greater than or equal to the number of merge candidates numMergeCand, the merge index mergeIdx indicates an invalid merge candidate that is not added to the merge candidate list mergeCandList. By applying the rules described later on the encoder side, even when the merge index mergeIdx indicates an invalid merge candidate, it is possible to select a valid merge candidate.
[00237] When the merge candidate is selected, the same method as that of the prediction method determination unit 107 can be used. The coding amount of the coding information and of a residual signal, as well as the coding distortion between the prediction image signal and the image signal, are derived for each merge candidate, and the merge candidate yielding the smallest coding amount and coding distortion is determined. The syntax element merge_idx of the merge index, which is the coding information of the merge mode, is entropy-encoded for each merge candidate to derive the coding amount of the coding information. Additionally, the coding amount of a prediction residual signal is derived for each merge candidate, the prediction residual signal being obtained by encoding the difference between the prediction image signal, obtained by performing motion compensation using the inter prediction information of the merge candidate according to the same method as the motion compensated prediction unit 105, and the image signal of the encoding target supplied from the image memory 101. The total occurrence coding amount, obtained by adding the coding amount of the coding information (that is, the merge index) and the coding amount of the prediction residual signal, is derived and used as an evaluation value. [00238] Furthermore, after such a prediction residual signal is encoded, the prediction residual signal is decoded to evaluate the amount of distortion, and the coding distortion is derived as a ratio representing the error from the original image signal that results from the encoding. The total occurrence coding amount and the coding distortion are compared for each merge candidate, so that the coding information yielding the smallest occurrence coding amount and coding distortion is determined.
The merge index that corresponds to the determined encoding information is encoded as a merge_idx flag represented by a second forecast block unit syntax pattern. [00239] The occurrence coding amount derived in this document is preferably obtained by simulating the coding process, but can be obtained by approximation or estimation. [00240] On the other hand, when the number of merge candidates numMergeCand is 0 (step S8101 of FIG. 37: NO), the interpreter information that has the default value that corresponds to the predetermined slice type is supplied to the motion compensated forecast unit 105 (steps S8103 to S8105). When the slice type is slice P (step S8103 of FIG. 37: YES), the default value of the interpreter information is adjusted in such a way that the L0 prediction (Pred_L0) is used (the values of the flags predFlagL0[xP][yP] and predFlagL1[xP][yP] are 1 and 0, respectively), the reference index L0 is 0 (the reference index values refIdxL0[xP][yP] and refIdxL1[xP][yP] are 0 and -1, respectively), and the vector value L0 mvL0[xP][yP] is (0, 0) (step S8104 of FIG. 37). On the other hand, when the slice type is not slice P (step S8103: NO) (that is, the slice type is slice B), the default value of the interpreter information is adjusted in such a way that the interpretation mode is the double prediction (Pred_BI) (both values of the flags predFlagL0[xP][yP] and predFlagL1[xP][yP] are 1), both reference indexes are 0 (both reference index values refIdxL0[xP][yP] and refIdxL1[xP][yP] are 0), and both the vector values L0 and L1 mvL0[xP][yP] and mvL1[xP][yP] are (0, 0) (step S8105).
Regardless of the slice type, even when the slice type is slice B, the default value of the interpretation information can be adjusted in such a way that the L0 prediction (Pred_L0) is used (the values of the flags predFlagL0[xP][yP] and predFlagL1[xP][yP] are 1 and 0, respectively), the reference index L0 is 0 (the values of the reference indexes refIdxL0[xP][yP] and refIdxL1[xP][yP] are 0 and -1, respectively), and the vector value L0 mvL0[xP][yP] is (0, 0). [00241] Next, the interpreter information selection unit 237 of the interpreter information derivation unit 205 of the moving image decoding device will be described. FIG. 38 is a flow chart for describing the process flow of the interpreter information selection unit 237 of the interpreter information derivation unit 205 of the moving image decoding device. In FIG. 13 of the first practical example, when the number of merge candidates numMergeCand is greater than 0 (step S9101 of FIG. 38: YES), the interpreter information selection unit 237 of the interpreter information derivation unit 205 of the moving image decoding device selects a merge candidate that matches the mergeIdx merge index supplied from the second bit stream decoder 202 from among the merge candidates added to the mergeCandList merge candidate list, supplies the interpreter information, which includes the flags predFlagL0[xP][yP] and predFlagL1[xP][yP] that indicate whether or not to use the L0 and L1 predictions of the selected merge candidate, the L0 and L1 reference indexes refIdxL0[xP][yP] and refIdxL1[xP][yP] and the motion vectors L0 and L1 mvL0[xP][yP] and mvL1[xP][yP], to the motion compensated forecasting unit 206, and stores them in the coding information storage memory 210 (step S9102 of FIG. 38).
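The default interpreter information chosen when the candidate list is empty can be sketched as below; the dictionary layout is an illustrative assumption, while the flag, reference index and vector values follow the text (steps S8103 to S8105 and S9103 to S9105):

```python
# Hedged sketch of the default interpreter information per slice type when the
# merge candidate list is empty. P slices use L0 prediction only (Pred_L0);
# B slices use double prediction (Pred_BI) with both lists active.
def default_inter_info(slice_type):
    if slice_type == "P":
        return {"predFlagL0": 1, "predFlagL1": 0,
                "refIdxL0": 0, "refIdxL1": -1,
                "mvL0": (0, 0), "mvL1": None}
    # slice B: double prediction, both reference indexes 0, both vectors (0, 0)
    return {"predFlagL0": 1, "predFlagL1": 1,
            "refIdxL0": 0, "refIdxL1": 0,
            "mvL0": (0, 0), "mvL1": (0, 0)}

assert default_inter_info("P")["refIdxL1"] == -1
assert default_inter_info("B")["predFlagL1"] == 1
```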
[00242] When a merge index that indicates an invalid merge candidate is encoded on the encoder side, an invalid merge candidate is selected on the decoder side. In this case, the interpretation is performed using invalid interpretation information and unexpected forecast signals can be obtained. In addition, the interpretation mode may have a value that does not conform to the standards and the reference index may indicate a reference image that is not present, so that an error can occur and the decoding may end abnormally. [00243] Thus, in the first practical example of the present modality, when the supplied mergeIdx merge index value is greater than or equal to the number of merge candidates numMergeIdx, the value of the number of merge candidates numMergeIdx is set to the mergeIdx merge index, and then the process is performed. When the supplied mergeIdx merge index value is greater than or equal to the number of merge candidates numMergeIdx, the mergeIdx merge index set on the encoder side indicates an invalid merge candidate that is not added to the mergeCandList merge candidate list. By clipping the mergeIdx merge index, it is possible to obtain the merge candidate that was added last to the mergeCandList merge candidate list. By defining the clipping process in the mergeIdx merge index, it is possible to prevent the decoder from selecting a merge candidate that is not added to the mergeCandList merge candidate list. [00244] Alternatively, when the supplied mergeIdx merge index value is greater than or equal to the number of merge candidates numMergeIdx, by adjusting the merge candidate's interpretation information to a predetermined value, it is possible to prevent an invalid merge candidate from being selected.
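A minimal sketch of the decoder-side clipping described above, under the assumption that clipping maps an out-of-range index onto the candidate added last to the list (index numMergeCand - 1):

```python
# Sketch of the clipping rule: an out-of-range merge index falls back to the
# last candidate actually present in the list, so the decoder never selects
# a candidate outside the merge candidate list.
def clip_merge_index(merge_idx, num_merge_cand):
    if merge_idx >= num_merge_cand:
        return num_merge_cand - 1   # assumed fallback: last added candidate
    return merge_idx

assert clip_merge_index(1, 3) == 1   # in range: unchanged
assert clip_merge_index(5, 3) == 2   # out of range: last valid candidate
```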
The merge candidate's predetermined interpretation information is adjusted such that the forecast mode is the L0 forecast, the reference index value is 0, and the motion vector value is (0, 0). In the case of B slices, the forecast mode can be adjusted to the double prediction. [00245] On the other hand, when the number of merge candidates numMergeCand is 0 (step S9101 of FIG. 38: NO), the interpreter information that has the predefined value corresponding to the predetermined slice type is supplied to the motion compensated prediction unit 206 and is stored in the coding information storage memory 210 (steps S9103 to S9105 of FIG. 38). When the slice type is slice P (step S9103 of FIG. 38: YES), the default value of the interpretation information is adjusted in such a way that the L0 prediction (Pred_L0) is used (the values of the flags predFlagL0[xP][yP] and predFlagL1[xP][yP] are 1 and 0, respectively), the reference index L0 is 0 (the reference index values refIdxL0[xP][yP] and refIdxL1[xP][yP] are 0 and -1, respectively), and the vector value L0 mvL0[xP][yP] is (0, 0) (step S9104 of FIG. 38). On the other hand, when the slice type is not slice P (step S9103 of FIG. 38: NO) (that is, the slice type is slice B), the default value of the interpreter information is adjusted in such a way that the interpretation mode is the double prediction (Pred_BI) (both values of the flags predFlagL0[xP][yP] and predFlagL1[xP][yP] are 1), both reference indexes are 0 (both values of the reference indexes refIdxL0[xP][yP] and refIdxL1[xP][yP] are 0), and both the vector values L0 and L1 mvL0[xP][yP] and mvL1[xP][yP] are (0, 0) (step S9105 of FIG. 38).
Regardless of the slice type, even when the slice type is slice B, the default value of the interpretation information can be adjusted in such a way that the L0 prediction (Pred_L0) is used (the values of the flags predFlagL0[xP][yP] and predFlagL1[xP][yP] are 1 and 0, respectively), the reference index L0 is 0 (the values of the reference indexes refIdxL0[xP][yP] and refIdxL1[xP][yP] are 0 and -1, respectively), and the vector value L0 mvL0[xP][yP] is (0, 0). [00246] Next, a method of deriving interpretation information, according to a second practical example of the modality, will be described with reference to the drawings. FIG. 28 is a diagram illustrating a detailed configuration of the interpretation information derivation unit 104 of the moving image encoding device illustrated in FIG. 1, according to the second practical example of the modality. FIG. 29 is a diagram illustrating a detailed configuration of the interpretation information derivation unit 205 of the moving image decoding device illustrated in FIG. 2, according to the second practical example of the modality. The interpretation information derivation unit 104 illustrated in FIG. 28 of the second practical example is different from the interpretation information derivation unit 104 illustrated in FIG. 12 of the first practical example in that a valid merge candidate supplementation unit 135 is added. The interpretation information derivation unit 205 illustrated in FIG. 29 of the second practical example is different from the interpretation information derivation unit 205 illustrated in FIG. 13 of the first practical example in that a valid merge candidate supplementation unit 235 is added.
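The supplementation units just introduced fill the candidate list with valid entries so that every merge index from 0 to (maxNumMergeCand - 1) addresses a real candidate; a minimal sketch of that idea, using a zero-motion-vector L0 candidate as the filler and an illustrative tuple layout that is an assumption, not a structure from the specification:

```python
# Hedged sketch of valid merge candidate supplementation: pad the list with
# copies of a zero-vector candidate until it holds max_num_merge_cand entries,
# so no merge index within range can select an invalid (absent) candidate.
def supplement_valid_candidates(cand_list, max_num_merge_cand):
    filler = ("Pred_L0", 0, (0, 0))   # zero candidate used as the filler
    while len(cand_list) < max_num_merge_cand:
        cand_list.append(filler)
    return cand_list

cands = supplement_valid_candidates([("Pred_BI", 0, (3, -1))], 5)
assert len(cands) == 5
assert cands[4] == ("Pred_L0", 0, (0, 0))
```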
In the present modality, in the moving image encoding device and in the moving image decoding device, when the value of the largest number of merge candidates maxNumMergeCand is 0, the merge candidate derivation process and the merge candidate list building process of FIG. 30 can be omitted. [00247] FIG. 30 is a flow chart for describing the flow of a merge candidate derivation process and a merge candidate list building process, which are the common functions of the merge candidate list building unit 120 of the interpreter information derivation unit 104 of the moving image encoding device, and of the merge candidate list building unit 220 of the interpreter information derivation unit 205 of the moving image decoding device, according to the second practical example of the embodiment of the present invention. The flow chart of FIG. 30 of the second practical example is different from the flow chart of FIG. 15 of the first practical example in that a valid merge candidate derivation process of step S105 is added. [00248] Similar to the first practical example, the merge candidate list building unit 130 of the interpretation information derivation unit 104 of the moving image encoding device, and the merge candidate list building unit 230 of the interpreter information derivation unit 205 of the moving image decoding device create the mergeCandList merge candidate list (step S100 of FIG. 30). The spatial merge candidate construction unit 131 of the interpreter information derivation unit 104 of the moving image encoding device, and the spatial merge candidate construction unit 231 of the interpreter information derivation unit 205 of the moving image decoding device derive spatial merge candidates A, B, C, D and E from the forecast blocks A, B, C, D and E next to the target coding/decoding block, from the encoding information stored in the encoding information storage memory 115 of the moving image encoding device or in the encoding information storage memory 210 of the moving image decoding device, and add the derived spatial merge candidates to the mergeCandList merge candidate list (step S101 of FIG. 30). The time merge candidate reference index derivation unit 132 of the interpreter information derivation unit 104 of the moving image encoding device, and the time merge candidate reference index derivation unit 232 of the interpreter information derivation unit 205 of the moving image decoding device derive the reference indexes of the time merge candidates from the forecast blocks next to the target coding/decoding block and supply the derived reference indexes to the time merge candidate derivation unit 133 of the interpretation information derivation unit 104 of the moving image encoding device, and to the time merge candidate derivation unit 233 of the interpretation information derivation unit 205 of the moving image decoding device (step S102 of FIG. 30). The time merge candidate derivation unit 133 of the interpretation information derivation unit 104 of the moving image encoding device, and the time merge candidate derivation unit 233 of the interpretation information derivation unit 205 of the moving image decoding device derive time merge candidates from images of different time and add the derived time merge candidates to the mergeCandList merge candidate list (step S103 of FIG. 30).
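The overall construction flow of FIG. 30 (steps S100 to S105) can be sketched as below; every helper is an illustrative stand-in for a derivation unit described in the text, not a real codec API, and the candidate tuples (prediction mode, reference index, motion vector) are assumptions:

```python
# Sketch of the merge candidate list construction flow of FIG. 30.
def derive_spatial_candidates():
    # S101: pretend two of the neighbour blocks A, B, C, D, E yield candidates
    return [("Pred_L0", 0, (2, 1)), ("Pred_BI", 0, (0, -3))]

def derive_temporal_ref_index():
    # S102: reference index taken from the neighbouring forecast blocks
    return 0

def derive_temporal_candidates(ref_idx):
    # S103: one candidate from a temporally different, already coded image
    return [("Pred_L0", ref_idx, (1, 0))]

def derive_additional_candidates(cands, max_cands):
    # S104: combine existing candidates into new ones (stubbed as a no-op here)
    return []

def supplement_valid_candidates(cands, max_cands):
    # S105: pad with zero-vector candidates so every index in range is valid
    while len(cands) < max_cands:
        cands.append(("Pred_L0", 0, (0, 0)))

def build_merge_cand_list(max_num_merge_cand):
    merge_cand_list = []                                    # S100: create list
    merge_cand_list += derive_spatial_candidates()          # S101
    ref_idx = derive_temporal_ref_index()                   # S102
    merge_cand_list += derive_temporal_candidates(ref_idx)  # S103
    merge_cand_list += derive_additional_candidates(
        merge_cand_list, max_num_merge_cand)                # S104
    supplement_valid_candidates(merge_cand_list, max_num_merge_cand)  # S105
    return merge_cand_list

cands = build_merge_cand_list(5)
assert len(cands) == 5                       # every merge index 0..4 is valid
assert cands[4] == ("Pred_L0", 0, (0, 0))    # filler added by step S105
```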
The additional merge candidate derivation unit 134 of the interpretation information derivation unit 104 of the moving image encoding device, and the additional merge candidate derivation unit 234 of the interpretation information derivation unit 205 of the moving image decoding device derive additional merge candidates, using the number of merge candidates numMergeCand added to the mergeCandList merge candidate list as the largest number of merge candidates maxNumMergeCand, when the number of merge candidates numMergeCand added to the mergeCandList merge candidate list is less than the largest number of merge candidates maxNumMergeCand, and add the derived additional merge candidates to the mergeCandList merge candidate list (step S104 of FIG. 30). The above processes are the same as those in the first practical example. Subsequently, in the second practical example, the valid merge candidate supplementation unit 135 and the valid merge candidate supplementation unit 235 supplement a valid merge candidate to eliminate an invalid merge candidate, within a range where the merge index in the merge candidate list is indicated by a value from 0 to (maxNumMergeCand - 1) (step S105 of FIG. 30). By eliminating the invalid merge candidate within a range where the merge index has a value of 0 to (maxNumMergeCand - 1), it is ensured that an invalid merge candidate is not selected on the decoder side and only a valid merge candidate is selected. [00249] A valid merge candidate derivation method, which is the process of step S105 of FIG. 30, performed by the valid merge candidate supplementation unit 135 illustrated in FIG. 28 and by the valid merge candidate supplementation unit 235 illustrated in FIG. 29, according to the second practical example of the present embodiment, will be described in detail with reference to the flowchart of FIG. 31. FIG.
31 is a flow chart for describing the flow of the valid merge candidate derivation process of step S105 of FIG. 30, according to the second practical example of the present embodiment. [00250] In the valid merge candidate derivation process of FIG. 31 of the second practical example, a plurality of merge candidates that have the same interpretation information value is added to the merge candidate list, for the purpose of adding a valid merge candidate to the merge candidate list with a simple process, until an invalid merge candidate is eliminated within a range where the merge index in the merge candidate list is indicated by a value from 0 to (maxNumMergeCand - 1). A valid merge candidate, whose motion vector value of the interpretation mode that corresponds to the slice type is (0, 0), is added to the merge candidate list (steps S6101 to S6113 of FIG. 31). [00251] First, when the number of merge candidates numMergeCand is less than the largest number of merge candidates maxNumMergeCand (step S6102 of FIG. 31: YES), the flow proceeds to step S6103. When the number of merge candidates numMergeCand is not less than the largest number of merge candidates maxNumMergeCand (step S6102 of FIG. 31: NO), the valid merge candidate derivation process ends. [00252] Subsequently, when the slice type is slice P (step S6103 of FIG. 31: YES), merge candidates whose interpretation mode is the L0 prediction (Pred_L0), whose reference index is 0 and whose vector value is (0, 0) are used as the valid merge candidates. (0, 0) is adjusted for the motion vectors mvL0Zero and mvL1Zero of the valid merge candidates (step S6104 of FIG. 31), 0 and -1 are adjusted for the reference indexes refIdxL0Zero and refIdxL1Zero of the valid merge candidates, respectively (step S6105 of FIG. 31), and 1 and 0 are set for the flags predFlagL0Zero and predFlagL1Zero of the valid merge candidates, respectively (step S6106 of FIG. 31).
Then, the flow proceeds to step S6110. [00253] On the other hand, when the slice type is not slice P (step S6103 of FIG. 31: NO) (that is, when the slice type is slice B), merge candidates whose interpretation mode is the double prediction (Pred_BI), whose reference indexes are both 0 and whose vector values are both (0, 0) are used as the valid merge candidates. (0, 0) is adjusted for the motion vectors mvL0Zero and mvL1Zero of the valid merge candidates (step S6107 of FIG. 31), the index value i is adjusted for the reference indexes refIdxL0Zero and refIdxL1Zero of the valid merge candidates (step S6108 of FIG. 31), and 1 is set for the flags predFlagL0Zero and predFlagL1Zero of the valid merge candidates (step S6109 of FIG. 31). Then, the flow proceeds to step S6110. [00254] Subsequently, the valid merge candidate is added to the position in which the merge index of the mergeCandList merge candidate list is indicated by the same value as numMergeCand (step S6110 of FIG. 31), and the number of merge candidates numMergeCand is increased by 1 (step S6112 of FIG. 31). Then, the flow proceeds to step S6113. [00255] The processes of steps S6102 to S6112 are performed repeatedly until the number of merge candidates numMergeCand reaches the largest number of merge candidates maxNumMergeCand (steps S6101 to S6113 of FIG. 31). With these processes, in the second practical example, invalid merge candidates are eliminated within a range where the merge index in the merge candidate list is indicated by the value from 0 to (maxNumMergeCand - 1). [00256] In FIG. 28 of the second practical example, when the number of merge candidates numMergeCand is greater than 0 (step S8101 of FIG. 37: YES), similarly to the interpreter information selection unit 137 illustrated in FIG.
12 of the first practical example, the interpreter information selection unit 137 of the interpreter information derivation unit 104 of the moving image encoding device selects merge candidates from the merge candidates added to the merge candidate list, supplies the interpreter information, which includes the flags predFlagL0[xP][yP] and predFlagL1[xP][yP] that indicate whether or not to use the L0 forecast and the L1 forecast of the respective forecast blocks of the selected merge candidate, the reference indexes refIdxL0[xP][yP] and refIdxL1[xP][yP] and the motion vectors mvL0[xP][yP] and mvL1[xP][yP], to the motion compensated forecast unit 105, and the merge index to identify the selected merge candidate is supplied to the forecast method determination unit 107 (step S8102 of FIG. 37). However, in the second practical example, invalid merge candidates are not present in the range where the merge index in the merge candidate list is indicated by the value from 0 to (maxNumMergeCand - 1) and all merge candidates are valid merge candidates. When the number of merge candidates numMergeCand is 0 (step S8101: NO), the interpreter information that has the default value that corresponds to the predetermined slice type is supplied to the motion compensated forecast unit 105 (steps S8103 to S8105). [00257] On the other hand, in FIG. 29 of the second practical example, when the number of merge candidates numMergeCand is greater than 0 (step S9101 of FIG. 38: YES), similarly to the interpreter information selection unit 237 illustrated in FIG. 13 of the first practical example, the interpreter information selection unit 237 of the interpreter information derivation unit 205 of the moving image decoding device selects a merge candidate that matches the mergeIdx merge index supplied from the second bit stream decoder 202 from among the merge candidates added to the mergeCandList merge candidate list, supplies the interpreter information, which includes the flags predFlagL0[xP][yP] and predFlagL1[xP][yP] that indicate whether or not to use the L0 and L1 predictions of the selected merge candidate, the L0 and L1 reference indexes refIdxL0[xP][yP] and refIdxL1[xP][yP] and the motion vectors L0 and L1 mvL0[xP][yP] and mvL1[xP][yP], to the motion compensated forecast unit 206, and stores them in the coding information storage memory 210. However, in the second practical example, invalid merge candidates are not present in the range where the merge index in the merge candidate list is indicated by the value from 0 to (maxNumMergeCand - 1) and all merge candidates are valid merge candidates. On the other hand, when the number of merge candidates numMergeCand is 0 (step S9101 of FIG. 38: NO), the interpreter information that has the default value that corresponds to the predetermined slice type is supplied to the motion compensated forecast unit 206 and is stored in the coding information storage memory 210 (steps S9103 to S9105 of FIG. 38). [00258] Next, a method of deriving interpretation information will be described, according to a third practical example of the present modality. FIG. 28 is also a diagram illustrating a detailed configuration of the interpretation information derivation unit 104 of the moving image encoding device illustrated in FIG. 1, according to the third practical example of the modality. FIG. 29 is also a diagram illustrating a detailed configuration of the interpretation information derivation unit 205 of the moving image decoding device illustrated in FIG. 2, according to the third practical example of the modality. FIG.
30 is also a flow chart for describing the flow of a merge candidate derivation process and a merge candidate list building process, which are the common functions of the merge candidate list building unit 120 of the interpreter information derivation unit 104 of the moving image encoding device, and of the merge candidate list building unit 220 of the interpreter information derivation unit 205 of the moving image decoding device, according to the third practical example of the embodiment of the present invention. In the third practical example, similarly to the second practical example, the valid merge candidate supplementation unit 135 illustrated in FIG. 28 and the valid merge candidate supplementation unit 235 illustrated in FIG. 29 supplement valid merge candidates to eliminate invalid merge candidates within the range where the merge index in the merge candidate list is indicated by the value from 0 to (maxNumMergeCand - 1) (step S105 of FIG. 30). By eliminating the invalid merge candidate within a range where the merge index has a value of 0 to (maxNumMergeCand - 1), it is ensured that an invalid merge candidate is not selected on the decoder side and only a valid merge candidate is selected. However, in the third practical example, regardless of the type of slice, merge candidates whose interpretation mode is the L0 prediction (Pred_L0), whose reference index is 0 and whose vector value is (0, 0) are used as the valid merge candidates. The merge candidate list building unit 120 of the interpretation information derivation unit 104 of the moving image encoding device of the second practical example illustrated in FIG. 28 and the merge candidate list building unit 220 of the interpretation information derivation unit 205 of the moving image decoding device illustrated in FIG. 29 have the same configuration as those in the third practical example. However, the process of step S105 of FIG.
30 performed by the valid merge candidate supplementation unit 135 and by the valid merge candidate supplementation unit 235 is different from that of the second practical example. [00259] A valid merge candidate derivation method, which is the process of step S105 of FIG. 30 performed by the valid merge candidate supplementation unit 135 illustrated in FIG. 28 and by the valid merge candidate supplementation unit 235 illustrated in FIG. 29, according to the third practical example of the present embodiment, will be described in detail with reference to the flowchart of FIG. 32. FIG. 32 is a flow chart for describing the flow of the valid merge candidate derivation process of step S105 of FIG. 30, according to the third practical example of the present modality. [00260] In the valid merge candidate derivation process of FIG. 32 of the third practical example, similarly to the valid merge candidate derivation process of FIG. 31 of the second practical example, a plurality of merge candidates that have the same interpretation information value is added to the merge candidate list, for the purpose of adding a valid merge candidate to the merge candidate list with a simple process, until an invalid merge candidate is eliminated within a range where the merge index in the merge candidate list is indicated by a value from 0 to (maxNumMergeCand - 1). However, in the third practical example, regardless of the type of slice, adjusting the interpretation mode to the L0 prediction (Pred_L0), a valid merge candidate whose motion vector value of the interpretation mode that corresponds to the slice type is (0, 0) is added to the merge candidate list (steps S6101 to S6113 of FIG. 32). [00261] First, when the number of merge candidates numMergeCand is less than the largest number of merge candidates maxNumMergeCand (step S6102 of FIG. 32: YES), the flow proceeds to step S6103.
When the number of merge candidates numMergeCand is not less than the largest number of merge candidates maxNumMergeCand (step S6102 of FIG. 32: NO), the valid merge candidate derivation process ends. [00262] Subsequently, merge candidates whose interpretation mode is the L0 prediction (Pred_L0), whose reference index is 0 and whose vector value is (0, 0) are used as the valid merge candidates. (0, 0) is adjusted for the motion vectors mvL0Zero and mvL1Zero of the valid merge candidates (step S6104 of FIG. 32), 0 and -1 are adjusted for the reference indexes refIdxL0Zero and refIdxL1Zero of the valid merge candidates, respectively (step S6105 of FIG. 32), and 1 and 0 are set for the flags predFlagL0Zero and predFlagL1Zero of the valid merge candidates, respectively (step S6106 of FIG. 32). [00263] Subsequently, the valid merge candidate is added to the position in which the merge index of the mergeCandList merge candidate list is indicated by the same value as numMergeCand (step S6110 of FIG. 32), and the number of merge candidates numMergeCand is increased by 1 (step S6112 of FIG. 32). Then, the flow proceeds to step S6113. [00264] The processes of steps S6102 to S6112 are performed repeatedly until the number of merge candidates numMergeCand reaches the largest number of merge candidates maxNumMergeCand (steps S6101 to S6113 of FIG. 32). With these processes, in the third practical example, invalid merge candidates are eliminated within a range where the merge index in the merge candidate list is indicated by the value from 0 to (maxNumMergeCand - 1). [00265] Next, a method of deriving interpretation information will be described, according to a fourth practical example of the present modality. FIG. 28 is also a diagram illustrating a detailed configuration of the interpretation information derivation unit 104 of the moving image encoding device illustrated in FIG.
1, according to the fourth practical example of the modality. FIG. 29 is also a diagram illustrating a detailed configuration of the interpretation information derivation unit 205 of the moving image decoding device illustrated in FIG. 2, according to the fourth practical example of the modality. FIG. 30 is also a flow chart for describing the flow of a merge candidate derivation process and a merge candidate list building process, which are the common functions of the merge candidate list building unit 120 of the interpreter information derivation unit 104 of the moving image encoding device, and of the merge candidate list building unit 220 of the interpreter information derivation unit 205 of the moving image decoding device, according to the fourth practical example of the embodiment of the present invention. In the fourth practical example, similarly to the second and third practical examples, the valid merge candidate supplementation unit 135, illustrated in FIG. 28, and the valid merge candidate supplementation unit 235, illustrated in FIG. 29, supplement valid merge candidates to eliminate invalid merge candidates within the range where the merge index in the merge candidate list is indicated by the value from 0 to (maxNumMergeCand - 1) (step S105 of FIG. 30). By eliminating the invalid merge candidate within a range where the merge index has a value of 0 to (maxNumMergeCand - 1), it is ensured that an invalid merge candidate is not selected on the decoder side and only a valid merge candidate is selected. However, in the fourth practical example, a merge candidate that was last added to the merge candidate list is repeatedly added to the merge candidate list as a valid merge candidate. The merge candidate list building unit 120 of the interpretation information derivation unit 104 of the moving image encoding device of the second and third practical examples illustrated in FIG.
28, and the merge candidate list building unit 220 of the interpreter information derivation unit 205 of the moving image decoding device shown in FIG. 29, have the same configuration as that of the fourth practical example. However, the process of step S105 of FIG. 30 performed by the valid merge candidate supplementation unit 135 and by the valid merge candidate supplementation unit 235 is different from that of the second and third practical examples. [00266] A valid merge candidate derivation method, which is the process of step S105 of FIG. 30 performed by the valid merge candidate supplementation unit 135, illustrated in FIG. 28, and by the valid merge candidate supplementation unit 235, shown in FIG. 29, according to the fourth practical example of the present modality, will be described in detail with reference to the flowchart of FIG. 33. FIG. 33 is a flow chart for describing the flow of the valid merge candidate derivation process of step S105 of FIG. 30, according to the fourth practical example of the present modality. [00267] In the valid merge candidate derivation process of FIG. 33 of the fourth practical example, similarly to the valid merge candidate derivation processes of FIGS. 31 and 32 of the second and third practical examples, respectively, a plurality of merge candidates that have the same interpretation information value is added to the merge candidate list, for the purpose of adding a valid merge candidate to the merge candidate list with a simple process, until an invalid merge candidate is eliminated within a range where the merge index in the merge candidate list is indicated by a value from 0 to (maxNumMergeCand - 1). However, in the fourth practical example, a merge candidate last added to the merge candidate list is repeatedly added to the merge candidate list as a valid merge candidate (steps S6101 to S6113 of FIG. 33).
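The fourth practical example's filler rule can be sketched as follows; the candidate tuples (prediction mode, reference index, motion vector) are an illustrative assumption:

```python
# Sketch of the fourth practical example: instead of a zero candidate, the
# candidate added last to the list is duplicated until every merge index up
# to max_num_merge_cand - 1 addresses a valid entry.
def supplement_with_last_candidate(cand_list, max_num_merge_cand):
    assert cand_list, "at least one candidate must already be in the list"
    while len(cand_list) < max_num_merge_cand:
        cand_list.append(cand_list[-1])   # repeat the last added candidate
    return cand_list

cands = supplement_with_last_candidate([("Pred_L0", 0, (4, 2))], 3)
assert cands == [("Pred_L0", 0, (4, 2))] * 3
```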
[00268] First, when the number of merge candidates numMergeCand is less than the largest number of merge candidates maxNumMergeCand (step S6102 of FIG. 33: YES), the flow proceeds to step S6111. When the number of merge candidates numMergeCand is not less than the largest number of merge candidates maxNumMergeCand (step S6102 of FIG. 33: NO), the valid merge candidate derivation process ends. [00269] Subsequently, the merge candidate last added to the merge candidate list is repeatedly added to the merge candidate list as a valid merge candidate (step S6111 of FIG. 33). Specifically, a merge candidate whose interpretation mode, reference index and vector value are the same as those of the merge candidate added at the position that corresponds to the index value (numMergeIdx - 1) of the merge candidate list is added, as the valid merge candidate, at the position where the merge index of the mergeCandList merge candidate list is indicated by the same value as numMergeCand. Subsequently, the number of merge candidates numMergeCand is increased by 1 (step S6112 of FIG. 33) and the flow proceeds to step S6113. [00270] The processes of steps S6102 to S6112 are repeatedly performed until the number of merge candidates numMergeCand reaches the largest number of merge candidates maxNumMergeCand (steps S6101 to S6113 of FIG. 33). With these processes, in the fourth practical example, invalid merge candidates are eliminated within a range where the merge index in the merge candidate list is indicated by the value from 0 to (maxNumMergeCand - 1). [00271] Next, a method of deriving interpretation information will be described, according to a fifth practical example of the present modality. FIG. 28 is also a diagram illustrating a detailed configuration of the interpretation information derivation unit 104 of the moving image encoding device illustrated in FIG.
1, according to the fifth practical example of the embodiment. FIG. 29 is also a diagram illustrating a detailed configuration of the inter prediction information derivation unit 205 of the moving image decoding device illustrated in FIG. 2, according to the fifth practical example of the embodiment. FIG. 30 is also a flowchart describing the flow of a merge candidate derivation process and a merge candidate list building process, which are the common functions of the merge candidate list building unit 120 of the inter prediction information derivation unit 104 of the moving image encoding device and of the merge candidate list building unit 220 of the inter prediction information derivation unit 205 of the moving image decoding device, according to the fifth practical example of the embodiment of the present invention. In the fifth practical example, a combination of the additional merge candidate derivation process of FIG. 26 and the valid merge candidate derivation process of FIG. 33 of the fourth practical example is used. [00272] A valid merge candidate and additional merge candidate derivation method of step S110, which is a combination of steps S104 and S105 of FIG. 30, performed by a valid merge candidate and additional merge candidate derivation block 121 of the fifth practical example, which is a combination of the processes performed by the additional merge candidate derivation unit 134 and the valid merge candidate supplementation unit 135 of FIG. 28, and by a valid merge candidate and additional merge candidate derivation block 221, which is a combination of the processes performed by the additional merge candidate derivation unit 234 and the valid merge candidate supplementation unit 235 of FIG. 29, will be described in detail. FIG. 34 is a flowchart describing the flow of the valid merge candidate and additional merge candidate derivation process of step S110 of FIG.
30, according to the fifth practical example of the present embodiment. [00273] In the valid merge candidate and additional merge candidate derivation process of FIG. 34, a plurality of merge candidates having different inter prediction information values is derived and added to the merge candidate list in order to broaden the choices of merge candidates and improve coding efficiency. Thereafter, a plurality of merge candidates having the same inter prediction information value is added to the merge candidate list, so that valid merge candidates are added to the merge candidate list until no invalid merge candidate remains within the range where the merge index in the list is indicated by a value from 0 to (maxNumMergeCand - 1) (steps S5101 to S5119 of FIG. 34). [00274] First, when the slice type is slice P (step S5101 of FIG. 34: YES), the value of the number of L0 reference indexes is set to the variable numRefIdx, which indicates the number of reference indexes (step S5102 of FIG. 34). On the other hand, when the slice type is not slice P (step S5101 of FIG. 34: NO), that is, when the slice type is slice B, the smaller value between the number of L0 reference indexes and the number of L1 reference indexes is set to the variable numRefIdx, which indicates the number of reference indexes (step S5103 of FIG. 34). Subsequently, 0 is set to the reference index i (step S5104 of FIG. 34). [00275] Subsequently, an additional merge candidate, whose motion vector value for the prediction mode corresponding to the slice type is (0, 0), is derived while the reference index i is changed, and is added to the merge candidate list (steps S5105 to S5119 of FIG. 34). [00276] First, when the number of merge candidates numMergeCand is less than the maximum number of merge candidates maxNumMergeCand (step S5106 of FIG. 34: YES), the flow proceeds to step S5107. When the number of merge candidates numMergeCand is not less than the maximum number of merge candidates maxNumMergeCand (step S5106 of FIG. 34: NO), the additional merge candidate derivation process ends. [00277] Subsequently, when the reference index i is less than the variable numRefIdx (step S5107 of FIG. 34: YES), the flow proceeds to step S5109 and an additional merge candidate addition process is performed. When the reference index i is not less than the variable numRefIdx (step S5107 of FIG. 34: NO), (numRefIdx - 1) is set to the reference index i (step S5108 of FIG. 34), the flow proceeds to step S5109, and a valid merge candidate addition process is performed. [00278] Subsequently, when the slice type is slice P (step S5109 of FIG. 34: YES), (0, 0) is set to the motion vectors mvL0Zero and mvL1Zero of the additional merge candidate (step S5110 of FIG. 34), the value of the reference index i and -1 are set to the reference indexes refIdxL0Zero and refIdxL1Zero of the additional merge candidate, respectively (step S5111 of FIG. 34), and 1 and 0 are set to the flags predFlagL0Zero and predFlagL1Zero of the additional merge candidate, respectively (step S5112 of FIG. 34). Then, the flow proceeds to step S5116. [00279] On the other hand, when the slice type is not slice P (step S5109 of FIG. 34: NO), that is, when the slice type is slice B, (0, 0) is set to the motion vectors mvL0Zero and mvL1Zero (step S5113 of FIG. 34), the value of the reference index i is set to both reference indexes refIdxL0Zero and refIdxL1Zero of the additional merge candidate (step S5114 of FIG. 34), and 1 is set to both flags predFlagL0Zero and predFlagL1Zero of the additional merge candidate (step S5115 of FIG. 34). Then, the flow proceeds to step S5116.
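Taken together, steps S5101 to S5119 amount to a loop that keeps appending zero-motion-vector candidates, advancing the reference index and then clamping it to (numRefIdx - 1), until the list is full. The following Python rendering is only an illustrative sketch under assumed data structures; the dict fields stand in for (mvL0Zero, mvL1Zero), (refIdxL0Zero, refIdxL1Zero) and (predFlagL0Zero, predFlagL1Zero) and are not the normative representation.

```python
def derive_zero_mv_candidates(merge_cand_list, max_num_merge_cand,
                              slice_type, num_ref_idx_l0, num_ref_idx_l1=0):
    """Fifth practical example (steps S5101-S5119): fill the merge candidate
    list with zero-motion-vector candidates so that no invalid entry remains
    at merge indexes 0 .. max_num_merge_cand - 1."""
    if slice_type == "P":                                # steps S5101-S5102
        num_ref_idx = num_ref_idx_l0
    else:                                                # slice B, step S5103
        num_ref_idx = min(num_ref_idx_l0, num_ref_idx_l1)
    i = 0                                                # step S5104
    while len(merge_cand_list) < max_num_merge_cand:     # steps S5105-S5106
        ref = i if i < num_ref_idx else num_ref_idx - 1  # steps S5107-S5108
        if slice_type == "P":                            # steps S5109-S5112
            cand = {"mv": ((0, 0), (0, 0)), "ref_idx": (ref, -1),
                    "pred_flag": (1, 0)}                 # L0 prediction
        else:                                            # steps S5113-S5115
            cand = {"mv": ((0, 0), (0, 0)), "ref_idx": (ref, ref),
                    "pred_flag": (1, 1)}                 # bi-prediction
        merge_cand_list.append(cand)                     # steps S5116-S5117
        i += 1                                           # step S5118
    return merge_cand_list

# B slice with 2 L0 and 3 L1 reference pictures: numRefIdx = min(2, 3) = 2,
# so the reference index runs 0, 1 and then stays clamped at 1.
lst = derive_zero_mv_candidates([], 5, "B", num_ref_idx_l0=2, num_ref_idx_l1=3)
print([c["ref_idx"] for c in lst])  # [(0, 0), (1, 1), (1, 1), (1, 1), (1, 1)]
```

Once the reference indexes are exhausted, the repeated clamped candidates play the role of the valid merge candidates that keep every merge index usable.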
[00280] Subsequently, the additional merge candidate is added at the position where the merge index of the merge candidate list mergeCandList is indicated by the same value as numMergeCand (step S5116 of FIG. 34), and the number of merge candidates numMergeCand is increased by 1 (step S5117 of FIG. 34). Subsequently, the index i is increased by 1 (step S5118 of FIG. 34) and the flow proceeds to step S5119. [00281] The processes of steps S5106 to S5118 are repeatedly performed for the respective reference indexes i (steps S5105 to S5119 of FIG. 34). With these processes, in the fifth practical example, invalid merge candidates are eliminated within the range where the merge index in the merge candidate list is indicated by a value from 0 to (maxNumMergeCand - 1). [00282] Next, a method of deriving inter prediction information will be described according to a sixth practical example of the present embodiment. FIG. 28 is also a diagram illustrating a detailed configuration of the inter prediction information derivation unit 104 of the moving image encoding device illustrated in FIG. 1, according to the sixth practical example of the embodiment. FIG. 29 is also a diagram illustrating a detailed configuration of the inter prediction information derivation unit 205 of the moving image decoding device illustrated in FIG. 2, according to the sixth practical example of the embodiment. FIG. 30 is also a flowchart describing the flow of a merge candidate derivation process and a merge candidate list building process, which are the common functions of the merge candidate list building unit 120 of the inter prediction information derivation unit 104 of the moving image encoding device and of the merge candidate list building unit 220 of the inter prediction information derivation unit 205 of the moving image decoding device, according to the sixth practical example of the embodiment of the present invention.
Although the implementation method of the sixth practical example is different from that of the second practical example, the same inter prediction information can be obtained on the decoder side. In the sixth practical example, the inter prediction information within the entire range of indexes in the merge candidate list, that is, where the index is indicated by a value from 0 to (maxNumMergeCand - 1), is initialized to a predetermined value, and then the process of deriving and adding the respective merge candidates is performed. When the slice type is slice P, the merge candidate list building unit 130 of FIG. 28 and the merge candidate list building unit 230 of FIG. 29 initialize all elements of the merge candidate list by setting the prediction mode to L0 prediction (Pred_L0), the reference index to 0 and the motion vector value to (0, 0). When the slice type is not slice P (that is, when the slice type is slice B), all elements of the merge candidate list are initialized by setting the prediction mode to bi-prediction (Pred_BI), both reference indexes to 0 and both motion vector values to (0, 0). Additionally, 0 is set to the number of merge candidates numMergeCand. [00283] Additionally, the valid merge candidate supplementation unit 135 of FIG. 28 and the valid merge candidate supplementation unit 235 of FIG. 29, according to the sixth practical example, make the initialized inter prediction information valid, so that these merge candidates are used as valid merge candidates. A valid merge candidate derivation method, which is the process of step S105 of FIG. 30 performed by the valid merge candidate supplementation unit 135 illustrated in FIG. 28 and by the valid merge candidate supplementation unit 235 illustrated in FIG. 29, according to the sixth practical example of the present embodiment, will be described in detail with reference to the flowchart of FIG. 35. FIG. 35 is a flowchart describing the flow of the process of making the initialized inter prediction information valid as valid merge candidates of step S105 of FIG. 30, according to the sixth practical example of the present embodiment. When the number of merge candidates numMergeCand is less than the maximum number of merge candidates maxNumMergeCand (step S6201 of FIG. 35), the value of the maximum number of merge candidates maxNumMergeCand is set to the number of merge candidates numMergeCand (step S6202 of FIG. 35). With this process, the merge candidate list building unit 130 of FIG. 28 and the merge candidate list building unit 230 of FIG. 29 make the initialized inter prediction information valid, so that these merge candidates are used as valid merge candidates. [00284] Next, a method of deriving inter prediction information will be described according to a seventh practical example of the present embodiment. FIG. 28 is also a diagram illustrating a detailed configuration of the inter prediction information derivation unit 104 of the moving image encoding device illustrated in FIG. 1, according to the seventh practical example of the embodiment. FIG. 29 is also a diagram illustrating a detailed configuration of the inter prediction information derivation unit 205 of the moving image decoding device illustrated in FIG. 2, according to the seventh practical example of the embodiment. FIG. 30 is also a flowchart describing the flow of a merge candidate derivation process and a merge candidate list building process, which are the common functions of the merge candidate list building unit 120 of the inter prediction information derivation unit 104 of the moving image encoding device and of the merge candidate list building unit 220 of the inter prediction information derivation unit 205 of the moving image decoding device, according to the seventh practical example of the embodiment of the present invention. Although the implementation method of the seventh practical example is different from that of the third practical example, the same inter prediction information can be obtained on the decoder side. In the seventh practical example, similarly to the sixth practical example, the inter prediction information within the entire range of indexes in the merge candidate list, that is, where the index is indicated by a value from 0 to (maxNumMergeCand - 1), is initialized to a predetermined value, and then the process of deriving and adding the respective merge candidates is performed. However, in the seventh practical example, the merge candidate list building unit 130 of FIG. 28 and the merge candidate list building unit 230 of FIG. 29 initialize all elements of the merge candidate list by setting the prediction mode to L0 prediction (Pred_L0), the reference index to 0 and the motion vector value to (0, 0), regardless of the slice type. The other processes are the same as those of the sixth practical example. [00285] The present embodiment has been described above. When a merge index indicating an invalid merge candidate is encoded on the encoder side, an invalid merge candidate is selected on the decoder side. In this case, inter prediction is performed using invalid inter prediction information and unexpected prediction signals may be obtained. In addition, the prediction mode may have a value that does not conform to the standards and the reference index may indicate a reference image that is not present, so an error may occur and the decoding may end abnormally. [00286] According to the first practical example of the present embodiment, even when a merge index indicating an invalid merge candidate is encoded on the encoder side, inter prediction using the inter prediction information of the invalid merge candidate is not performed on the decoder side. Since the moving image decoding device can obtain the same inter prediction information and the same prediction signal as the encoder side, it is possible to obtain the same decoded image. [00287] According to the second to seventh practical examples of the present embodiment, a merge index indicating an invalid merge candidate is not selected and encoded on the encoder side, and it is ensured that inter prediction using the inter prediction information of an invalid merge candidate is not performed on the decoder side. Since the moving image decoding device can obtain the same inter prediction information and the same prediction signal, it is possible to obtain the same decoded image. [00288] According to the second to fifth practical examples of the present embodiment, the valid merge candidate supplementation unit 135 of the moving image encoding device and the valid merge candidate supplementation unit 235 of the moving image decoding device add valid merge candidates until no invalid merge candidate remains within the range where the merge index in the merge candidate list is indicated by a value from 0 to (maxNumMergeCand - 1). However, valid merge candidates may be added up to a predetermined range equal to or greater than (maxNumMergeCand - 1), as long as no invalid merge candidate is present at least in the range from 0 to (maxNumMergeCand - 1). [00289] According to the sixth and seventh practical examples of the present embodiment, the merge candidate list building unit 120 of the inter prediction information derivation unit 104 of the moving image encoding device and the merge candidate list building unit 220 of the inter prediction information derivation unit 205 of the moving image decoding device initialize the inter prediction information in the range where the merge index in the merge candidate list is indicated by a value from 0 to (maxNumMergeCand - 1) to the predetermined value. However, it suffices to initialize the inter prediction information at least in the range from 0 to (maxNumMergeCand - 1), and the inter prediction information may also be initialized up to a predetermined range equal to or greater than (maxNumMergeCand - 1). [00290] In the embodiment described above, the spatial merge candidate, the temporal merge candidate and the additional merge candidate are derived. However, an embodiment in which any of the respective merge candidate derivation processes is omitted is also included in the present invention. In addition, an embodiment in which any of the respective merge candidate derivation processes is modified, or in which a new merge candidate derivation process is added, is also included in the present invention. [00291] When the additional merge candidate derivation process of FIG. 26 described in the present embodiment is performed and the slice type is slice B, the method of the third and seventh practical examples, in which a valid L0-prediction merge candidate having an inter prediction information value different from that of the additional merge candidate is supplemented, is more suitable than the method of the second and sixth practical examples, in which a valid merge candidate having the same inter prediction information value as the additional merge candidate is supplemented. When the additional merge candidate derivation process of FIG. 26 described in the present embodiment is not performed and the slice type is slice B, the method of the second and sixth practical examples, in which a valid bi-prediction merge candidate having high prediction efficiency is supplemented, is more suitable than the method of the third and seventh practical examples, in which a valid L0-prediction merge candidate is supplemented. [00292] When the value of the maximum number of merge candidates maxNumMergeCand is 0 and inter prediction information having a default value is not set, the skip mode and the merge mode are prohibited; although the skip mode and merge mode flags are transmitted, the coding efficiency decreases because the skip mode or merge mode cannot be selected. In addition, when a skip mode or merge mode that is prohibited on the encoder side is selected and an encoded bitstream is decoded, an error occurs on the decoder side and the decoding process may end abnormally.
This way, even when the value of the largest number of maxNumMergeCand merge candidates is 0, it is ensured that encoding is not performed on the encoder side with the forward mode or the merge mode as an invalid value and that the interpretation with the use of interpreter information that has the predetermined default value is performed on the decoder side. In this way, since the moving image decoding device can obtain the same interpretation information and the same forecast signal, it is possible to obtain the same decoded image. Additionally, even when the value for the largest number of maxNumMergeCand merge candidates is 0, since the drive mode or merge mode can be selected, the coding efficiency is improved, as compared to when the drive mode or the blending mode is inhibited. [00294] When the value of the largest number of maxNumMergeCand merge candidates is 0, the merge mode interpreter information, which includes the forward mode, uses the default value, it is not necessary to carry out the candidate list construction process merge, except in the case where the value of the number Petition 870190021217, of 03/01/2019, p. 150/536 147/150 more merge candidates maxNumMergeCand is greater than or equal to 1. In this way, it is possible to produce an encoding that does not perform the process of building the merge candidate list and has a small amount of processing. In addition, since the process on the decoder side involves adjusting the default value to the merge mode interpreter information, which includes the drive mode only, it is possible to minimize the amount of processing on the decoder side and handle a device decoding capabilities capable of suppressing a decrease in coding efficiency. 
[00295] The bitstream of the moving image output by the moving image encoding device, according to the modality, has a specific data format that can be decoded according to an encoding method used in the modality, and the moving image decoding device that corresponds to the moving image encoding device can decode the bit stream that has the specific data format. [00296] When a wireless or wired network is used to exchange the bit stream between the moving image encoding device and the moving image decoding device, the bit stream can be converted to have a data suitable for a way of transmitting a communication path and then transmitted. In that case, a moving image transmitter is provided, which converts the bit stream emitted by the moving image encoding device into encoding data, which has the appropriate data format for a way of transmitting a transmission path and then transmits the encoding data to the network, and a moving image receiver that receives the encoding data from the Petition 870190021217, of 03/01/2019, p. 151/536 148/150 network, reconstructs the bit stream and supplies the reconstructed bit stream to the moving image decoding device. [00297] The moving image transmitter includes a memory that temporarily stores the bit stream emitted by the moving image encoding device, a packet processing unit that carries the bit stream in packet, and a transmission unit that transmits packet encoding data over the network. The moving image receiver includes a receiving unit that receives packet encoding data over the network, a memory that temporarily stores received encoding data, and a packet processing unit that performs packet processing on the encoding to build the bit stream and provides the bit stream built into the moving image decoding device. 
[00298] The process related to encoding and decoding described above can be implemented as transmission, accumulation and receivers using hardware, and can be implemented by firmware stored in a read-only memory (ROM), a flash memory, or similar , or computer software, or the like. A firmware program and software program can be registered on a computer-readable registration medium and provided, can be provided from a server via a wired or wireless network, or can be provided as data broadcast by digital broadcasting terrestrial or satellite. [00299] The embodiment of the present invention has been described above. The modality is an example, and one skilled in the art can understand that several modifications or changes in a combination of respective constituent components and processing processes can be made and such modifications or changes are made. Petition 870190021217, of 03/01/2019, p. 152/536 149/150 within the scope of the invention. DESCRIPTION OF REFERENCE NUMBERS 101: Image memory 117: Header information adjustment unit 102: Motion vector detector 103: Motion vector difference derivation unit 104: Interprisation information derivation unit 105: Motion-compensated forecasting unit 106: Intra-forecast unit 107: Forecast method determination unit 108: Residual signal building unit 109: Quantization and orthogonal transformation unit 118: First bitstream build unit 110: Second bit stream construction unit 111: Third bitstream build unit 112: Multiplexer 113: Inverse orthogonal transformation and decanting unit 114: Decoded image signal overlay unit 115: Encoding information storage memory 116: Decoded image memory 130: Merge candidate list building unit 131: Spatial merge candidate construction unit 132: Time merge candidate benchmark derivation unit 133: Time merge candidate derivation unit 134: Additional merge candidate derivation unit 135: Valid merge candidate supplementation unit 136: Merge candidate limiting unit 137: Interprisation information 
selection unit 201: Demultiplexer Petition 870190021217, of 03/01/2019, p. 153/536 150/150 212: First bit stream decoder 202: Second bit stream decoder 203: Third bit stream decoder 204: Motion vector branch unit 205: Interprisation information derivation unit 206: Motion-compensated forecasting unit 207: Intra-forecast unit 208: Inverse orthogonal transformation and decanting unit 209: Decoding image signal overlay unit 210: Encoding information storage memory 211: Decoded image memory 230: Merge candidate list building unit 231: Spatial merge candidate construction unit 232: Time merge candidate benchmark derivation unit 233: Time merge candidate derivation unit 234: Additional merge candidate derivation unit 235: Valid merge candidate supplementation unit 236: Merge candidate limiting unit 237: Interprisation information selection unit INDUSTRIAL APPLICABILITY [00300] The present invention can be used for encoding and decoding techniques
Claims (3)
1. Moving image decoding device that decodes a bitstream obtained by encoding moving images, using inter prediction based on inter prediction information of a merge candidate, in units of blocks obtained by dividing each image of the moving images, characterized by the fact that it comprises:
a prediction information decoding unit (202) which decodes information indicating a previously designated number of merge candidates;
a prediction information derivation unit (104) which derives merge candidates from prediction information of a prediction block neighboring a decoding target prediction block, or of a prediction block present at the same position as or near the decoding target prediction block in a decoded image that is temporally different from the decoding target prediction block;
a candidate list building unit (120, 220) which builds a merge candidate list from the derived merge candidates;
a candidate supplementation unit (135, 235) which repeatedly adds to the merge candidate list merge candidates of which a prediction mode has the same value, of which a reference index has the same value and of which a motion vector has the same value, until the number of merge candidates included in the merge candidate list reaches the designated number of merge candidates, when the number of merge candidates included in the built merge candidate list is less than the designated number of merge candidates; and
a motion-compensated prediction unit (105, 205) which selects a merge candidate from the merge candidates included in the merge candidate list and performs inter prediction on the decoding target prediction block using the inter prediction information of the selected merge candidate.
2. Moving image decoding method of decoding a bitstream obtained by encoding moving images, using inter prediction based on inter prediction information of a merge candidate, in units of blocks obtained by dividing each image of the moving images, characterized by the fact that it comprises:
a prediction information decoding step of decoding information indicating a previously designated number of merge candidates;
a prediction information derivation step of deriving merge candidates from prediction information of a prediction block neighboring a decoding target prediction block, or of a prediction block present at the same position as or near the decoding target prediction block in a decoded image that is temporally different from the decoding target prediction block;
a candidate list building step of building a merge candidate list from the derived merge candidates;
a candidate supplementation step of repeatedly adding to the merge candidate list merge candidates of which a prediction mode has the same value, of which a reference index has the same value and of which a motion vector has the same value, until the number of merge candidates included in the merge candidate list reaches the designated number of merge candidates, when the number of merge candidates included in the built merge candidate list is less than the designated number of merge candidates; and
a motion-compensated prediction step of selecting a merge candidate from the merge candidates included in the merge candidate list and performing inter prediction on the decoding target prediction block using the inter prediction information of the selected merge candidate.
Soo Mi|Method for decoding inter predictive encoded motion pictures| PL3716621T3|2012-04-12|2022-01-24|Jvckenwood Corporation|Moving picture coding device, moving picture coding method, moving picture coding program, and moving picture decoding device, moving picture decoding method, moving picture decoding program| US10205950B2|2014-02-21|2019-02-12|Panasonic Corporation|Image decoding method, image encoding method, image decoding apparatus, and image encoding apparatus| CN107113440B|2014-10-31|2020-10-13|三星电子株式会社|Video decoding method executed by video decoding device| JP6678357B2|2015-03-31|2020-04-08|リアルネットワークス,インコーポレーテッド|Motion vector selection and prediction method in video coding system| US10638129B2|2015-04-27|2020-04-28|Lg Electronics Inc.|Method for processing video signal and device for same| US10462479B2|2015-07-10|2019-10-29|Nec Corporation|Motion picture encoding device, motion picture encoding method, and storage medium storing motion picture encoding program| CN108432252A|2015-12-22|2018-08-21|真实网络公司|Motion vector selection and forecasting system in Video coding and method| EP3590259A4|2017-02-23|2020-08-19|RealNetworks, Inc.|Coding block bitstream structure and syntax in video coding systems and methods| CN110419217A|2018-04-02|2019-11-05|深圳市大疆创新科技有限公司|Method and image processing apparatus for image procossing| JP2021530936A|2018-06-29|2021-11-11|北京字節跳動網絡技術有限公司Beijing Bytedance Network Technology Co., Ltd.|Look-up table updates: FIFO, restricted FIFO| EP3794824A1|2018-06-29|2021-03-24|Beijing Bytedance Network Technology Co. 
Ltd.|Conditions for updating luts| WO2020003282A1|2018-06-29|2020-01-02|Beijing Bytedance Network Technology Co., Ltd.|Managing motion vector predictors for video coding| CN110662064A|2018-06-29|2020-01-07|北京字节跳动网络技术有限公司|Checking order of motion candidates in LUT| GB2588528A|2018-06-29|2021-04-28|Beijing Bytedance Network Tech Co Ltd|Selection of coded motion information for LUT updating| TWI735902B|2018-07-02|2021-08-11|大陸商北京字節跳動網絡技術有限公司|Lookup table with intra frame prediction and intra frame predication from non adjacent blocks| US10924731B2|2018-08-28|2021-02-16|Tencent America LLC|Complexity constraints on merge candidates list construction| CN110868601A|2018-08-28|2020-03-06|华为技术有限公司|Inter-frame prediction method and device, video encoder and video decoder| TW202025760A|2018-09-12|2020-07-01|大陸商北京字節跳動網絡技術有限公司|How many hmvp candidates to be checked| CN112236996A|2018-12-21|2021-01-15|株式会社 Xris|Video signal encoding/decoding method and apparatus thereof| CN113383554A|2019-01-13|2021-09-10|北京字节跳动网络技术有限公司|Interaction between LUTs and shared Merge lists|
Legal status:

2018-03-27 | B15K | Other matters concerning applications: alteration of classification | IPC: H04N 7/00 (2011.01)
2018-12-04 | B06F | Objections, documents and/or translations required after an examination request (chapter 6.6 of the patent gazette)
2019-12-10 | B09A | Decision: intention to grant (chapter 9.1 of the patent gazette)
2019-12-10 | B15K | Other matters concerning applications: alteration of classification | Free format text: the previous classification was H04N 7/00. IPC: H04N 19/105 (2014.01), H04N 19/176 (2014.01), H04N
2020-02-11 | B16A | Patent of invention or certificate of addition of invention granted (chapter 16.1 of the patent gazette) | Free format text: term of validity: 20 (twenty) years counted from 2013-04-12, subject to the legal conditions.
Priority:

Application number | Filing date | Patent title
JP2012091385 | 2012-04-12 |
JP2012091386 | 2012-04-12 |
JP2012-091385 | 2012-04-12 |
JP2012-091386 | 2012-04-12 |
JP2013083577A | JP6020323B2 | 2012-04-12 | 2013-04-12 | Moving picture coding apparatus, moving picture coding method, moving picture coding program, transmission apparatus, transmission method, and transmission program
JP2013-083577 | 2013-04-12 |
PCT/JP2013/002513 | WO2013153823A1 | 2012-04-12 | 2013-04-12 | Video encoding device, video encoding method, video encoding program, transmission device, transmission method, and transmission program, and video decoding device, video decoding method, video decoding program, receiving device, receiving method, and receiving program
JP2013-083578 | 2013-04-12 |
JP2013083578A | JP5633597B2 | 2012-04-12 | 2013-04-12 | Moving picture decoding apparatus, moving picture decoding method, moving picture decoding program, receiving apparatus, receiving method, and receiving program